
Actively Open-Minded Thinking About Evidence (AOT-E) Scale: Adaptation and Evidence of Validity in a Brazilian Sample

Published online by Cambridge University Press:  10 January 2025

Andressa Bonafé-Pontes*
Affiliation:
Department of Social and Work Psychology, Institute of Psychology, University of Brasilia, Brasilia, Brazil; National Institute of Science and Technology on Social and Affective Neuroscience (INCT-SANI), São Paulo, Brazil
Rafael Costa Bastos
Affiliation:
Department of Social and Work Psychology, Institute of Psychology, University of Brasilia, Brasilia, Brazil
Ronaldo Pilati
Affiliation:
Department of Social and Work Psychology, Institute of Psychology, University of Brasilia, Brasilia, Brazil; National Institute of Science and Technology on Social and Affective Neuroscience (INCT-SANI), São Paulo, Brazil
Corresponding author: Andressa Bonafé-Pontes; Email: andressa.pontes@aluno.unb.br

Abstract

The present study aimed to adapt the actively open-minded thinking about evidence (AOT-E) scale to Brazilian Portuguese and assess its psychometric properties and nomological network in a Brazilian sample. It begins by investigating the underlying content structure of the AOT-E in its original form. Results from an exploratory factor analysis (EFA) of secondary data serve as the basis for the proposed adaptation, which is subjected to both EFA and confirmatory factor analysis (CFA). A total of 718 participants from various regions of Brazil completed an online survey that included the AOT-E, along with other instruments that allowed for an assessment of the scale’s nomological network, including measures of science literacy, attitude toward science, conspiracy beliefs, and religiosity. The EFA and CFA of both the original and Brazilian samples suggested a unidimensional solution. Despite the differences found in a multigroup CFA, polychoric correlations provided evidence of expected nomological relationships that replicate international findings. Overall, this study contributes to expanding the availability of adapted and validated research instruments in non-WEIRD samples.

Type
Empirical Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

1. Introduction

The study of open-minded cognition has long been central to social psychology. This cognitive style reflects how people select and process information and is characterized by a willingness to consider various perspectives, opinions, and beliefs, including those that are contrary to one’s own (Ottati & Wilson, 2018). It contrasts with a closed-minded thinking style, which easily falls prey to confirmation bias and is marked by the reinforcement of existing viewpoints (Klayman, 1995; Nickerson, 1998). As a key element for promoting prosocial behavior, along with tolerance, empathy, and acceptance, open-mindedness has received continued attention from researchers and practitioners (see Price et al., 2015 for a comprehensive review). This is especially true in the context of growing polarization, in which the ability to consider other people’s perspectives has important consequences at the societal level (Moaz et al., 2023).

Despite the importance of this construct in cognitive and social psychology research, there have been, to the best of our knowledge, no attempts to create or adapt an instrument to measure open-mindedness in Brazil. This absence is indicative of a greater and more concerning shortage of research instruments adapted and validated for use in non-WEIRD (Western, educated, industrialized, rich, and democratic) countries. This unfortunate reality reinforces a pattern of overreliance on samples that are not representative of human behavior in its complexity and diversity (Henrich et al., 2010; Rad et al., 2018; Muthukrishna et al., 2020; among many others).

We believe that an adaptation of the actively open-minded thinking about evidence (AOT-E) scale to Brazilian Portuguese could contribute to changing this scenario by helping to advance research in various domains. The scale could help researchers understand an array of phenomena that have grown in importance over the past few years, particularly the spread of fake news and conspiracy thinking that have accompanied growing political polarization and antiscience beliefs (Galli & Modesto, 2021). To this end, the present study proposes an adaptation of the AOT-E scale (Pennycook et al., 2020) to Brazilian Portuguese. We begin by carrying out an exploratory factor analysis (EFA) of the instrument in its original form (with data from Pennycook et al., 2020) to gain a better understanding of its underlying content structure. Findings from this initial investigation serve as the basis for the proposed adaptation, which is subsequently evaluated for its psychometric properties and nomological network in a Brazilian sample.

2. Actively Open-Minded Thinking Research and Measurement

Going beyond the mere consideration of arguments that are contrary to one’s favored position, actively open-minded thinking (AOT) means actually looking for them. As the proponent of this construct, Baron (Reference Baron1985, Reference Baron, Voss, Perkins and Segal1991–2008) described AOT as a prescriptive model that understands thinking as a search for possibilities (options, hypotheses), evidence (arguments, reasons), and goals (criteria, objectives).

Over the past three decades, research on individual differences in AOT has evolved, starting with qualitative studies on divisive issues, such as public school funding (Perkins et al., Reference Perkins, Bushey and Faraday1986) and abortion (Baron, Reference Baron1995). Efforts to measure thinking standards more directly (Stanovich & West, Reference Stanovich and West1997, Reference Stanovich and West1998) eventually led to the development of a 41-item AOT scale (Stanovich & West, Reference Stanovich and West2007) that has become the primary instrument for assessing AOT. Variations of this scale have been used in numerous studies, revealing negative correlations with beliefs in paranormal phenomena (Svedholm & Lindeman, Reference Svedholm and Lindeman2013), conspiracy theories (Swami et al., Reference Swami, Voracek, Stieger, Tran and Furnham2014), and superstitions (Svedholm & Lindeman, Reference Svedholm and Lindeman2017), while showing positive correlations with performance in various cognitive tasks (Haran et al., Reference Haran, Ritov and Mellers2013; Mellers et al., Reference Mellers, Stone, Atanasov, Roghbaugh, Metz, Ungar and Tetlock2015; Baron et al., Reference Baron, Scott, Fincher and Metz2015).

Despite the extensive evidence linking AOT to relevant variables, the factor structure of different AOT scale versions remains poorly documented. Janssen et al. (Reference Janssen, Verkoeijen, Heijltjes, Mainhard, van Peppen and van Gog2020) highlighted this gap, arguing that the assumption that AOT sum scores represent a single psychological trait is questionable. Their investigation of the scale’s psychometric properties raised concerns about its ability to differentiate respondents, suggesting that item sets do not reliably measure the intended latent trait. This raises doubts about whether AOT scores truly reflect the disposition they aim to assess (Janssen et al., Reference Janssen, Verkoeijen, Heijltjes, Mainhard, van Peppen and van Gog2020).

3. AOT About Evidence

The literature presented thus far inspired Pennycook et al. (2020) to use AOT as a framework for examining the gap between the rational expectation that beliefs should be evidence-based and the reality in which belief formation and revision are often constrained by biases, heuristics, blind spots, and so on (Kahneman et al., 1982; Kunda, 1990; Kahan et al., 2012; Pronin et al., 2002; Perkins, 2019; Stanovich et al., 2013). The authors focused on individual differences in people’s “meta-belief” about whether beliefs ought to change in response to evidence, arguing that this predisposition significantly shapes the beliefs, opinions, and values that individuals hold. Pennycook et al. (2020) underlined that “flexible thinking”, or the ability to question one’s own beliefs, is particularly crucial to the concept of AOT (Baron, 2019) and has often been the focus of shortened versions of the scale (Baron et al., 2015; Haran et al., 2013). Despite its centrality to AOT’s predictive validity, however, this “meta-belief” and its wide-ranging consequences had not yet been systematically studied.

To address this research gap, Pennycook et al. (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) derived the AOT-E scale from Stanovich and West’s (Reference Stanovich and West2007) AOT scale, identifying eight items related to the concept of belief mutability across four subscales: AOT, belief identification, dogmatism, and openness values. Their validation studies showed that the proposed AOT-E scale had good reliability (Cronbach’s alpha = .77) and was positively correlated with the Need for Cognition (NFC) scale and the Cognitive Reflection Test (CRT) (r = .37 and .35, respectively). Negative correlations were observed with the Faith in Intuition (FI) scale and religious beliefs (r = −.22 and −.54, respectively). The authors further assessed the discriminant validity of the proposed AOT-E by examining its relationship with openness to experience and religious belief. A significant but somewhat weak correlation with openness to experience (r = .16) showed that these scales seem to measure distinct constructs. This conclusion was further corroborated by the absence of a significant correlation between openness to experience and religious belief, which contrasts with a strong and significant correlation between AOT-E and religious belief (r = −.48).

In subsequent studies, Pennycook et al. (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) examined how AOT-E levels affect reasoning and the types of beliefs that individuals hold. First, they explored the relationship between AOT-E and conspiratorial, moral, paranormal, political, religious, and scientific beliefs. Findings included inverse associations with belief in conspiratorial, religious, and paranormal claims. Participants with high AOT-E also held less traditional moral values, were less conservative across various political and economic issues, and were less likely to hold antiscience beliefs. These results highlighted AOT-E’s remarkable predictive power, with the majority of correlations above .30, which can be considered a threshold for large effect sizes in personality and social psychological research based on Gignac and Szodorai’s (Reference Gignac and Szodorai2016) meta-analysis of typical effect sizes in these domains. One’s meta-belief about belief mutability seems to play an important role in what is taken to be true or false. Specifically, it “appears to support the rejection of epistemically suspect beliefs” (Pennycook et al., Reference Pennycook, Cheyne, Koehler and Fugelsang2020, p. 492).

In a later study, AOT-E’s strong predictive validity held true, even when implementing a revised version that referenced “opinions” instead of “beliefs” to mitigate potential biases against religious individuals (Stanovich & Toplak, Reference Stanovich and Toplak2019). Moreover, the authors found that correlations between CRT scores and conspiratorial, moral, paranormal, political, religious, and science beliefs were weaker than those between both versions of the AOT-E and said beliefs. These findings corroborate the hypothesis that people’s stance toward revising their beliefs (and opinions) according to evidence influences what they ultimately believe.

Pennycook et al. (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) also investigated AOT-E’s predictive power across the political spectrum. The results revealed greater predictive validity among Democrats than among Republicans. Despite these results, the hypothesis of motivated reasoning, or identity protective cognition, was not supported, as there were no opposing main effects for Democrats versus Republicans in the great majority of topics that were presented.

4. Applications of the AOT-E Scale

Since its publication, the AOT-E scale has been used by numerous researchers across various contexts. For instance, Edgcumbe (Reference Edgcumbe2021) found negative correlations between both the AOT and AOT-E scales and age, concluding that reluctance to question long-established beliefs and opinions should have a growing societal impact as the population grows older.

In an investigation of the roles of political ideology, science knowledge, and cognitive sophistication in science beliefs, Pennycook et al. (Reference Pennycook, Bago and McPhetres2023) found significant, positive correlations between the revised AOT-E and both liberal and conservative pro-science beliefs (liberal pro-science beliefs included evolution, the Big Bang, and global warming, while conservative pro-science beliefs included GM foods and nuclear power safety and skepticism toward homeopathic medicine and acupuncture). AOT-E scores were also positively associated with general science knowledge, science methodology knowledge, and CRT scores. Significant negative correlations were found with religiosity, political ideology, political partisanship, and bullshit receptivity.

Research on COVID-19 misinformation has also highlighted the relevance of AOT-E. Bronstein et al. (Reference Bronstein, Kummerfeld, MacDonald and Vinogradov2021) found significant negative correlations between AOT-E scores and belief in both fake and real vaccine-related news containing antivaccine sentiments. Conversely, AOT-E scores were positively associated with media truth discernment and knowledge of SARS-CoV-2. Similarly, Bonafé-Pontes et al. (Reference Bonafé-Pontes, Couto, Kakinohana, Travain, Schimidt and Pilati2021) reported that greater AOT-E scores, along with trust in the WHO and traditional media, positively correlated with truth discernment in the context of COVID-19 misinformation via messaging applications in Brazil. It is worth noting that, despite having used an adapted version of the AOT-E to Brazilian Portuguese, Bonafé-Pontes et al. (Reference Bonafé-Pontes, Couto, Kakinohana, Travain, Schimidt and Pilati2021) did not report the translation and adaptation processes nor did they carry out validation analyses, an important gap that we intend to fill (a CFA of these data corroborates the unifactorial structure of the Brazilian version of the AOT-E; results are available in the Supplementary Materials page).

To the best of our knowledge, there has only been one other translation of the (revised) AOT-E (Brzóska et al., Reference Brzóska, Nowak, Piotrowski, Żemojtel-Piotrowska, Sawicki and Jonason2022). The Polish adaptation showed good reliability (α = .91); however, additional psychometric data were not reported. The study reported evidence of the mediating role of open-minded thinking about evidence in the relationship between misperceptions about COVID-19, religiosity, and spirituality. Structural equation modeling (SEM) revealed that religiosity was associated with an increased belief in conspiratorial and false factual beliefs about COVID-19 through less analytic thinking and AOT-E, while nonreligious spirituality was associated with a decreased belief in misperceptions about COVID-19 through more analytic thinking and AOT-E.

5. Adaptation of the AOT-E Scale to Brazilian Portuguese

The present study starts by investigating the underlying content structure of the original AOT-E in its initial application by Pennycook et al. (Reference Pennycook, Cheyne, Koehler and Fugelsang2020)—specifically study 1, an online survey designed to explore AOT-E’s association with religious, political, conspiratorial, paranormal, moral, and science beliefs. Based on our findings from an EFA (study 1), along with the evidence of validity presented by the authors and reviewed above, we propose an adaptation of the scale to Brazilian Portuguese. In study 2, we describe the translation and adaptation processes and evaluate the psychometric properties of the proposed adaptation through EFA, CFA, and multigroup CFA. Given the reduced number of items, we expect to find a unidimensional structure both in Pennycook et al.’s (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) data and in the Brazilian adaptation.

Study 2 also investigates the nomological relationships surrounding the Brazilian version of the AOT-E scale. A nomological network is understood as an integrative theoretical framework that outlines the main constructs associated with a particular phenomenon and characterizes the relationships between them (APA, n.d.). It emphasizes the interconnected nature of theoretical constructs and helps better understand specific phenomena by shedding light on the lawful and reasoned patterns of relationships within the network (Cronbach & Meehl, Reference Cronbach and Meehl1955). Aiming to build this broad conceptual framework for the Brazilian AOT-E, we investigated its correlations with pro-science measures, conspiratorial beliefs, and religiosity. The selection of these particular variables was based on their presence in international AOT-E literature, as reviewed above. The validation of the proposed nomological network is demonstrated by significant correlations that replicate both the direction and (approximate) magnitude of previous findings (i.e., direct correlations with pro-science measures and inverse correlations with conspiratorial and religious beliefs).

Sample size determination, data manipulation, exclusions, and all measures in the studies are reported below. All data and materials are available at the Open Science Framework (https://osf.io/fw5mg/?view_only=3ad6bceb8fe24ea89af3c8313ff1c62c).

6. Study 1

6.1. Method

6.1.1. Participants

We utilized data from Pennycook et al.’s (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) study 1. As pointed out by the authors, 380 American participants were recruited from Mechanical Turk on February 18, 2016. After exclusions, the resulting sample (N = 375, Mean age = 35.8) was 57.6% male and 42.2% female.

6.1.2. Materials

Please refer to Pennycook et al. (2020) for detailed study design information. Although the AOT-E was accompanied by other measures, we report here only the variable of interest for our purposes.

6.1.2.1. Original AOT-E Scale

The AOT-E contains eight items, which are listed along with their respective factor loadings in Table 1. Participants responded on a scale of 1 (strongly disagree) to 6 (strongly agree). The AOT-E score was obtained by adding the item scores after the reversal of items 3, 4, 5, 7, and 8.
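As an illustration of this scoring rule, a minimal R sketch follows; the data frame and item names (aot1–aot8) are hypothetical placeholders rather than the variable names used in the original dataset, and the responses are simulated.

```r
# Minimal sketch of the AOT-E scoring rule described above. The data frame and
# item names (aot1-aot8) are hypothetical placeholders for the real dataset.
set.seed(1)
d <- as.data.frame(matrix(sample(1:6, 8 * 20, replace = TRUE), ncol = 8))
names(d) <- paste0("aot", 1:8)

reverse_items <- c("aot3", "aot4", "aot5", "aot7", "aot8")
d[reverse_items] <- 7 - d[reverse_items]         # reverse-code on the 1-6 scale
d$aot_e_total <- rowSums(d[paste0("aot", 1:8)])  # total score, possible range 8-48
```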

Table 1 Exploratory factor analysis of the original AOT-E

6.1.3. Procedures

We had access to the dataset through Pennycook et al.’s (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) Supplementary Materials page in the Open Science Framework (https://osf.io/fah57). For detailed information on the study procedures, please refer to the original manuscript.

6.2. Results

We performed EFA in FACTOR (version 12.01.02). Given the absence of multivariate normality (Mardia’s skewness and kurtosis tests were significant, p < .001) and the ordinal nature of the data, we implemented the Robust Diagonally Weighted Least Squares (RDWLS; Asparouhov & Muthen, 2010; Li, 2016) extraction method and a polychoric matrix with 500 bootstrap samples. The correlation matrix was deemed appropriate for the EFA, with a Kaiser-Meyer-Olkin (KMO) test of .90, 95% bias-corrected (BC) confidence interval (CI) [.86, .91]. This result puts the sample considerably above the acceptable threshold of .70 (Damásio, 2012).

The Parallel Analysis with Optimal Implementation (PA; Timmerman & Lorenzo-Seva, 2011) technique suggested a unidimensional solution, considering the real-data percentage of variance, the mean of the random percentage of variance, and the 95th percentile of the random percentage of variance. Unidimensionality was further confirmed by all three indices that integrate FACTOR’s Closeness to Unidimensionality Assessment, namely, the Unidimensional Congruence (UniCo = .98, 95% BC CI [.97, .99]), the Explained Common Variance (ECV = .87, 95% BC CI [.84, .91]), and the Mean of Item Residual Absolute Loadings (MIREAL = .27, 95% BC CI [.21, .31]) (Ferrando & Lorenzo-Seva, 2018).
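The EFA itself was run in the stand-alone FACTOR program. For readers working in R, the sketch below approximates the same steps (sampling adequacy, parallel analysis on a polychoric matrix, and a one-factor extraction) with the psych package; psych’s WLS estimator and parallel analysis differ in detail from FACTOR’s RDWLS and bootstrapped confidence intervals, and the simulated responses merely stand in for the original data (https://osf.io/fah57).

```r
# Approximate replication in R of the EFA steps run in FACTOR: sampling
# adequacy, parallel analysis on a polychoric matrix, and a one-factor
# extraction. psych's WLS estimator is not identical to FACTOR's RDWLS,
# and the simulated items below stand in for the original responses.
library(psych)

set.seed(2)
theta <- rnorm(375)                               # latent open-mindedness scores
aot <- as.data.frame(sapply(1:8, function(i)
  findInterval(0.7 * theta + rnorm(375), c(-2, -1, 0, 1, 2)) + 1))
names(aot) <- paste0("aot", 1:8)

KMO(aot)                                     # sampling adequacy (reported KMO = .90)
fa.parallel(aot, cor = "poly", fa = "fa")    # parallel analysis, polychoric correlations
efa <- fa(aot, nfactors = 1, fm = "wls", cor = "poly")  # one-factor WLS extraction
print(efa$loadings)
```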

Table 1 presents the factor loadings for each item, explained variance, eigenvalue, and reliability coefficients of the analysis. Reversed items are followed by “(R)”.

Construct replicability was corroborated by a high generalized H Index (H-latent = .92, 95% BC CI [.90, .93]; H-observed = .91, 95% BC CI [.89, .93]). High H values (>.80) are indicative of a well-defined latent variable, which has a greater chance of remaining stable across studies (Ferrando & Lorenzo-Seva, 2018).

Goodness-of-fit indices were derived using semiconfirmatory factor analysis (SCFA). While the root-mean-square error of approximation (RMSEA) was considered poor (RMSEA = .11, 95% BC CI [.07, .14]), the comparative fit index (CFI) and the Tucker-Lewis index (TLI) were adequate (CFI = .98, 95% BC CI [.96, .99]; TLI = .97, 95% BC CI [.95, .99]).

7. Study 2

7.1. Method

7.1.1. Translation and Adaptation of the AOT-E

The original AOT-E scale was translated and adapted to Brazilian Portuguese following the best practices suggested by the International Test Commission (ITC, 2017) and Borsa et al. (2012). Two native Brazilian Portuguese speakers produced independent translations, which were synthesized into an initial version that considered the contextual and conceptual adequacy of each item. A group of social psychology experts evaluated this initial version and suggested minor changes to improve clarity and understanding. After these changes were incorporated, the resulting scale was back-translated by an independent translator. The back-translated version was evaluated for its proximity to the original scale, along with semantic, idiomatic, experiential, and contextual correspondence. It is worth noting that the translation and adaptation efforts described here took place in 2020, which allowed the scale to be applied in a series of studies carried out since, including Bonafé-Pontes et al. (2021). The Brazilian Portuguese adaptation of the original AOT-E is presented in Table 2, along with the factor loadings and other relevant measures.

Table 2 Exploratory factor analysis of the Brazilian Portuguese version of the original AOT-E

7.1.2. Participants

Data were collected between March 2 and April 8, 2022. Voluntary participants were recruited through paid advertising on Facebook. A total of 787 participants from all regions of Brazil completed the study. Sixteen participants were duplicates, and 53 failed the attention checks. After exclusions, the final sample amounted to 718 participants (mean age = 42.72, SD = 15.4; 36.1% female, 62.8% male, 1.1% other). Of these, 7% had some high school or less, 17% had finished high school, 22% had some college but no degree, 28% had a bachelor’s degree and 29% had a graduate degree. The sample showed considerable variability across the political spectrum (M = 48.14, SD = 28.52), although it was predominantly nonreligious (M = 37.19, SD = 34.42).

Even after being divided in half, our sample size surpasses the 10:1 ratio of participants per item commonly suggested in the literature and exceeds the minimum of 200 participants, which is commonly recommended for factor analysis (Clark & Watson, Reference Clark and Watson1995; DeVellis, Reference DeVellis2003; Hair et al., Reference Hair, Black, Babin, Anderson and Tatham2005).

7.1.3. Measures

Items were measured on a slider scale ranging from 0 (totally disagree) to 100 (totally agree), unless otherwise stated. It is worth noting that the evidence on the impact of continuous rating scales versus Likert-type items is mixed (for reviews, see Chyung et al., 2018 and Roster et al., 2015). However, some research points to enhanced precision and sensitivity, greater participant engagement, ease of use, and a higher likelihood of normally distributed data (Funke & Reips, 2012; Voutilainen et al., 2016; Reips & Funke, 2008). Our own research experience corroborates these tendencies, and we therefore decided to use sliders.

7.1.3.1. Brazilian Version of the Original AOT-E Scale

As presented in Table 2.

7.1.3.2. Attitude Toward Science Scale (ATSS)

Developed by Novaes et al. (2019), the original ATSS has 42 items and a bi-dimensional structure comprising (1) beliefs and affects and (2) personal initiative. Given the length of the scale, we applied a reduced version that included the ten items with the highest factor loadings (above .70) in previous studies. “Science is essential to human development” and “I like to read about science” are examples of items in the reduced ATSS. As in the original study, the scale presented a bi-dimensional structure and good reliability (ω = .78; factor loadings between .47 and .85).

7.1.3.3. Scientific Literacy

Nine true/false items were used to measure the participants’ scientific knowledge. These items were translated into Brazilian Portuguese from Kahan et al. (2012) and Rutjens et al. (2018). Examples include “The center of the Earth is very hot” and “All human-made chemicals can cause cancer”. This scale had a unifactorial structure and good reliability (ω = .82; factor loadings between .34 and .77).

7.1.3.4. General Conspiracy Belief Scale (GCBS)

Developed by Rezende et al. (2021), the scale is composed of 15 items, including “New drugs and technologies are routinely tested in people without their knowledge” and “Secret organizations are in contact with extraterrestrials but keep it a secret”. Inspired by Brotherton et al.’s (2013) Generic Conspiracist Beliefs Scale, this instrument was created specifically for Brazilian samples and was thus considered a better fit for our purposes. In our study, the scale showed good reliability (ω = .91; factor loadings between .40 and .90).
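The ω coefficients reported for this and the preceding measures are McDonald’s omega (total). For reference, the following minimal sketch shows the computation from the standardized loadings of a one-factor solution, assuming uncorrelated residuals; the loadings are illustrative values, not estimates from this study.

```r
# McDonald's omega (total) for a unidimensional scale, computed from the
# standardized loadings of a one-factor solution (uncorrelated residuals).
# The loadings below are illustrative values, not estimates from this study.
lam <- c(.40, .55, .60, .70, .75, .80, .85, .90)
omega_total <- sum(lam)^2 / (sum(lam)^2 + sum(1 - lam^2))
round(omega_total, 2)
```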

7.1.3.5. Political Orientation and Religiosity

Participants were asked to indicate their position on the political spectrum and their level of religiosity on two slider scales that ranged from “extremely to the left” and “not religious at all” (0) to “extremely to the right” and “extremely religious” (100), respectively.

7.1.3.6. Sociodemographic Questions

Participants were asked questions regarding their gender, age, state of residence, and level of education.

7.1.3.7. Attention Check

Following best research practices (Gummer et al., 2021), we included two screening questions. The first instructed participants to select “Completely Disagree” to indicate that they were effectively paying attention to the survey. The second asked participants to identify the topic of the text presented in the experimental task.

7.1.4. Procedures and Analyses

The participants completed an online survey using Qualtrics. The measures described above were part of a larger experimental study focusing on the perceived safety of genetically modified foods. The reduced ATSS and the scientific literacy items were presented, in that order, prior to the experimental intervention. The AOT-E and the GCBS, along with political orientation, religiosity, and sociodemographic questions, were presented, in that order, after the experimental task. For detailed information on the study design, please refer to the Supplementary Materials.

We randomly divided our total sample into two halves to perform an EFA (n = 359), followed by a confirmatory factor analysis (CFA; n = 359). A different, randomly selected approximate half was utilized to implement the multigroup CFA (n = 362), which was combined with the data from study 1. EFA was implemented through FACTOR version 12.04.05, and both CFA and multigroup CFA were implemented through JASP version 0.17.1, which is itself based on the lavaan package for R. Correlations were performed for the entire sample (N = 718) using R.
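Because JASP wraps the lavaan package, the splitting and CFA steps can be approximated in R roughly as sketched below. The simulated data frame and item names are hypothetical stand-ins for the Brazilian dataset, and the items are treated as ordered categories, matching the DWLS estimation reported in the Results.

```r
# Rough R equivalent of the splitting and CFA steps; the CFA itself was run in
# JASP, which is based on lavaan. The simulated `brazil` data frame and the
# item names aot1-aot8 are hypothetical stand-ins for the study dataset, with
# responses already expressed as ordered categories.
library(lavaan)

set.seed(3)
items  <- paste0("aot", 1:8)
theta  <- rnorm(718)
brazil <- setNames(as.data.frame(sapply(1:8, function(i)
  findInterval(0.8 * theta + rnorm(718), c(-2, -1, 0, 1, 2)) + 1)), items)

efa_idx  <- sample(nrow(brazil), size = 359)
efa_half <- brazil[efa_idx, ]    # half submitted to EFA (run in FACTOR)
cfa_half <- brazil[-efa_idx, ]   # remaining half used for the CFA below

model <- 'aote =~ aot1 + aot2 + aot3 + aot4 + aot5 + aot6 + aot7 + aot8'
fit <- cfa(model, data = cfa_half, estimator = "WLSMV", ordered = items)
fitMeasures(fit, c("chisq.scaled", "df", "cfi.scaled", "tli.scaled",
                   "rmsea.scaled", "srmr"))
```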

7.2. Results

As in study 1, we implemented the RDWLS extraction method and a polychoric matrix with 500 bootstrap samples (despite the continuous nature of our data, we decided to keep the extraction method consistent across studies for greater comparability; alternative analyses with MLR extraction are available as Supplementary Material). The correlation matrix was once again considered appropriate for EFA, with a KMO test of .87, 95% BC CI [.79, .89].

The Parallel Analysis with Optimal Implementation technique suggested a unidimensional solution, which was corroborated by the Closeness to Unidimensionality Assessment (UniCo = .98, 95% BC CI [.97, .99]; ECV = .88, 95% BC CI [.85, .91]; and MIREAL = .19, 95% BC CI [.15, .21]).

Table 2 presents the factor loadings for each item, explained variance, eigenvalue, and reliability coefficients for the analysis (please note that the items are presented in the same order as the English version in Table 1). Reversed items are followed by “(R)”.

Construct replicability was corroborated by a high generalized H Index (H-latent = .89, 95% BC CI [.86, .91]; H-observed = .85, 95% BC CI [.79, .92]). Goodness-of-fit indices were derived from an SCFA and were considered adequate (RMSEA = .03, 95% BC CI [.00, .04]; CFI = .99, 95% BC CI [.99, 1.00]; TLI = .99, 95% BC CI [.99, 1.00]).

Similar to previous analyses, we performed CFA with DWLS estimation and robust error calculation (95% CI) (MLR estimation available as Supplementary Material). The goodness-of-fit statistics are reported in Table 3 and corroborated the unidimensional nature of the scale.

Table 3 CFA goodness-of-fit indices for the Brazilian Portuguese version of the AOT-E

Note: χ2 = robust mean and variance-adjusted Chi square; df = degrees of freedom;

* p < .001.

Next, we performed a multigroup CFA to examine whether there was measurement invariance across the samples. Measurement invariance, or equivalence, seeks to evaluate how comparable the measured traits are across various populations. It is typically assessed by comparing a series of increasingly constrained factor models (for a conceptual and methodological overview, see Leitgöb et al., 2023; also see Davidov et al., 2014; Van de Schoot et al., 2012). The process begins with configural invariance, which tests a distinct model for each group without imposing constraints, ensuring that the same model is valid across groups with the same latent variables and indicator relationships, although the loading strengths may differ. Next, metric invariance (or weak measurement invariance) examines whether factor loadings are consistent across groups, indicating similar relationships between the indicators and latent factors. If confirmed, this suggests that the latent construct has the same meaning across the groups. Finally, scalar invariance (or strong measurement invariance) requires both equal factor loadings and invariant indicator intercepts. Testing scalar invariance involves constraining the intercepts to be equal, allowing the observed mean differences in indicators to be attributed to actual differences in the latent construct. CFI, TLI, and RMSEA are commonly used to assess the degree to which the model, with its specified level of invariance, fits the data. Significant changes in these fit indices across increasingly constrained models can indicate a lack of invariance (Chen, 2007; Khademi et al., 2023).

For the multigroup CFA, responses from the Brazilian sample were transformed from the 0–100 to the 1–6 metric used in study 1: the 0–100 range was divided into six equal intervals of 100/6 points, and each score was assigned the category (1–6) of the interval within which it fell. Given the ordinal nature of the data, the analyses were performed using DWLS estimation and robust error calculation (95% CI). Table 4 presents the goodness-of-fit statistics for the three models, which tested configural, metric, and scalar invariance. Although these indices are generally considered adequate, the differences between the models are greater than the commonly suggested cutoff of .01 (Fischer & Karl, 2019), signaling the absence of metric and scalar invariance.

Table 4 Multigroup CFA goodness-of-fit indices

Note: N = 737; df = degrees of freedom;

* p < .05

** p < .001
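A lavaan sketch of the rescaling and of the three increasingly constrained models is shown below. The simulated data frames are hypothetical stand-ins for the two samples; note that with ordinal indicators lavaan constrains thresholds at the scalar step, the categorical analogue of the intercept constraints described above.

```r
# Sketch of the 0-100 to 1-6 rescaling and of the configural/metric/scalar
# models in lavaan. The simulated `study1` and `brazil_half` data frames are
# hypothetical stand-ins for the two samples.
library(lavaan)

set.seed(4)
items <- paste0("aot", 1:8)

# Stand-ins for Study 1 (1-6 responses) and the Brazilian half (0-100 sliders)
th1 <- rnorm(375)
study1 <- setNames(as.data.frame(sapply(1:8, function(i)
  findInterval(0.8 * th1 + rnorm(375), c(-2, -1, 0, 1, 2)) + 1)), items)
th2 <- rnorm(362)
brazil_half <- setNames(as.data.frame(sapply(1:8, function(i)
  pmin(100, pmax(0, 50 + 15 * th2 + rnorm(362, sd = 10))))), items)

# Bin the 0-100 scores into six equal-width intervals of 100/6 points each
to_six <- function(x) as.integer(cut(x, breaks = seq(0, 100, length.out = 7),
                                     include.lowest = TRUE))
brazil6 <- as.data.frame(lapply(brazil_half, to_six))

combined <- rbind(transform(study1,  sample = "US"),
                  transform(brazil6, sample = "BR"))

model <- 'aote =~ aot1 + aot2 + aot3 + aot4 + aot5 + aot6 + aot7 + aot8'
configural <- cfa(model, combined, group = "sample",
                  estimator = "WLSMV", ordered = items)
metric     <- cfa(model, combined, group = "sample", estimator = "WLSMV",
                  ordered = items, group.equal = "loadings")
scalar     <- cfa(model, combined, group = "sample", estimator = "WLSMV",
                  ordered = items, group.equal = c("loadings", "thresholds"))

# A drop in CFI greater than about .01 between adjacent models flags noninvariance
sapply(list(configural = configural, metric = metric, scalar = scalar),
       fitMeasures, fit.measures = c("cfi.scaled", "rmsea.scaled"))
```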

Subsequently, we performed polychoric correlations to assess the nomological validity of the Brazilian Portuguese version of the AOT-E. Further exploratory correlations were performed using the items related to political orientation, education, and age. Table 5 presents the correlation coefficients and confidence intervals. Overall, the magnitude of the correlations indicates moderate to strong relationships, and the results follow the expected directions based on the previously discussed evidence. Pro-science attitudes, scientific literacy, and education were positively associated with AOT-E scores. Meanwhile, conspiracy beliefs, political orientation to the right, religiosity, and age were negatively correlated with AOT-E (although age was only marginally correlated).

Table 5 Polychoric correlation coefficients

Note: ATSS = attitude toward science scale; GCBS = general conspiracy belief scale. 0–100 scores were divided into eight categories to allow for polychoric analyses using R.

* Correlation is significant at the .05 level (2-tailed).

*** Correlation is significant at the .001 level (2-tailed).
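As indicated in the note to Table 5, the 0–100 scores were binned into eight ordered categories before computing polychoric correlations. A minimal R sketch of this step follows; the data frame and column names are hypothetical placeholders for the study dataset (https://osf.io/fw5mg/).

```r
# Minimal sketch of the correlation step: 0-100 scores are binned into eight
# ordered categories and polychoric correlations are computed with psych.
# `dat` and its column names are hypothetical placeholders (simulated here).
library(psych)

set.seed(5)
dat <- as.data.frame(replicate(5, runif(718, 0, 100)))
names(dat) <- c("aot_e", "atss", "gcbs", "religiosity", "political_orientation")

to_eight <- function(x) as.integer(cut(x, breaks = seq(0, 100, length.out = 9),
                                       include.lowest = TRUE))
vars <- data.frame(
  aot_e       = to_eight(dat$aot_e),
  atss        = to_eight(dat$atss),
  gcbs        = to_eight(dat$gcbs),
  religiosity = to_eight(dat$religiosity),
  right_wing  = to_eight(dat$political_orientation)
)
round(polychoric(vars)$rho, 2)   # polychoric correlation matrix
```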

8. Discussion

This study aimed to propose an adaptation of the original AOT-E scale to Brazilian Portuguese, assess its psychometric properties, and search for evidence of its validity in a Brazilian sample. To achieve this goal, we started by filling a gap in Pennycook et al.’s (Reference Pennycook, Cheyne, Koehler and Fugelsang2020) psychometric analysis. Specifically, we performed an EFA to elucidate the underlying content structure of the original AOT-E in its initial application. For both the original and Brazilian samples, EFA procedures (i.e., PA and Closeness to Unidimensionality Assessment) indicated a unidimensional solution, which was further corroborated by CFA analyses.

Regarding factor loadings, in both the original and Brazilian Portuguese adaptations of the scale, no items were below Brown’s (Reference Brown2006) cutoff value (< .30). A possible exception is item 5 of the latter version, which was very close to that threshold and could thus suggest the need for revision. It is possible that the choice of verbs and their conjugation in Brazilian Portuguese may have caused confusion. However, goodness-of-fit indices from the SCFA and CFA indicate the adequacy of the scale. It is interesting to observe that despite the differences between the samples, items 2 and 4 were consistently among the highest loadings, while items 1 and 7 showed weaker loadings in both versions.

Notwithstanding these similarities, the results from the multigroup CFA lead us to believe that invariance between the American and Brazilian samples is limited to the configural level. The deterioration of the fit indices across models raises concerns regarding both metric and (to a greater extent) scalar invariance. In other words, although the factor model can be applied to both groups, factor loadings and intercepts vary across samples. It is worth noting, however, that the appropriate cutoff for the deterioration of fit across models has been the subject of debate (Khademi et al., 2023), and that metric invariance would be achieved under more lenient standards (i.e., a .02 cutoff).

When examining potential sources of nonequivalence, the literature identifies three broad categories for investigation: construct, method, and item biases (van de Vijver, Reference Van de Vijver1998). The first category addresses conceptual discrepancies, while the second encompasses a range of methodological issues, including variations in sampling procedures, response styles, and questionnaire administration. The third category focuses on item-level anomalies such as inadequate translations and culture-specific content (Davidov et al., Reference Davidov, Meuleman, Cieciuch, Schmidt and Billiet2014; Fischer & Karl, Reference Fischer and Karl2019).

Although further investigation is necessary, we can speculate about the potential causes of noninvariance in our study. First, the different response formats (a 1–6 Likert scale versus a 0–100 slider scale) could have impacted the responses. Future research should implement the original configuration to eliminate this confounding factor. In addition, the studies were run in very different contexts with a considerable time span between them. Notably, the COVID-19 pandemic occurred between the two studies and may have impacted people’s views on how evidence should shape their beliefs.

It is also worth noting that the implications of scalar noninvariance are themselves debatable. While some researchers argue that nonequivalence hinders the ability to make meaningful comparisons of indicators and factor means across groups (e.g., Steenkamp & Baumgartner, Reference Steenkamp and Baumgartner1998; Van de Schoot et al., Reference Van de Schoot, Lugtig and Hox2012; Church et al., Reference Church, Alvarez, Mai, French, Katigbak and Ortiz2011), others disagree with this view. This opposing perspective recognizes that measurement noninvariance is pervasive in cross-cultural studies and suggests that it is worthy of investigation. Rather than precluding meaningful comparisons altogether, the lack of scalar invariance can, in some instances, result from actual differences across groups, and should therefore be carefully investigated (Davidov et al., Reference Davidov, Meuleman, Cieciuch, Schmidt and Billiet2014; McCrae, Reference McCrae2015; Vandenberg & Lance, Reference Vandenberg and Lance2000; Thielmann et al., Reference Thielmann, Akrami, Babarović, Belloch, Bergh, Chirumbolo, Čolović, De Vries, Dostál, Egorova, Gnisci, Heydasch, Hilbig, Hsu, Izdebski, Leone, Marcus, Međedović, Nagy and Lee2019). We hope that future studies will further examine the results of our multigroup CFA and shed light on the issues at play.

Despite these results, we emphasize that the scale showed good reliability across all the assessed indices (i.e., Cronbach’s alpha and McDonald’s omega) in both samples. Similarly, the H-indices suggest a well-defined latent variable, which makes us optimistic about construct replicability across future studies and samples (Ferrando & Lorenzo-Seva, 2018).

Furthermore, evidence of nomological validity was found in the Brazilian sample. Specifically, we replicated previous findings of positive correlations between AOT-E and both pro-science and science knowledge measures (Pennycook et al., 2023), and of inverse correlations with conspiratorial beliefs and religiosity (Pennycook et al., 2020; Newton, 2020; Pennycook et al., 2023). Evidence-based revision is a cornerstone of scientific endeavors; thus, it is not surprising that individuals with greater AOT-E also have a greater affinity for science. Conversely, conspiracist and religious mentalities are often marked by a disregard for evidence that could undermine a preferred perspective, a stance that suggests lower levels of AOT-E.

Correlations with sociodemographic variables showed that less-educated and older participants had lower AOT-E scores (the latter corroborates Edgcumbe’s (2021) finding, although our effect size was minimal). Finally, right-leaning participants also had lower AOT-E scores, as previously reported in another Brazilian sample (Bonafé-Pontes et al., 2021).

The main limitation of this study pertains to the sampling, especially the lack of adequate representation of the Brazilian population in study 2. The majority of participants identified as male (62.8%) and were highly educated (78.9% had at least started undergraduate studies). A considerable portion (26.5%) reported not to be religious at all and 64.7% responded below the midpoint in our slider scale that ranged from “not religious at all” (0) to “extremely religious” (100). This is particularly unexpected in a country where, in 2010 (the latest census data available), 64.6% of the population described themselves as Catholic, 22.2% as Protestant, and 8% as nonreligious (Agência IBGE, 2012). Despite these biases, other sociodemographic characteristics had a more balanced distribution, particularly state of residence and age.

Another limitation refers to our decision to focus solely on the original version of the AOT-E rather than including the revised version that uses the word “opinions” rather than “beliefs”. As underlined by Stanovich and Toplak (Reference Stanovich and Toplak2019), this choice may leave room for misinterpretation (i.e., understanding “beliefs” as religious beliefs), which could lead to inflated results among religious individuals. However, we argue that this was not the case in our sample, which had a considerable proportion of nonreligious people, as previously reported. Furthermore, our preference for the original version is justified by a perceived greater applicability to the Brazilian lexicon. Specifically, opinião and the related verb achar (in English, opinion and to think) have a more transient connotation than crença and the related acreditar (in English, belief and to believe). When considering the different versions of the scale, we concluded that the revised one could have little discriminative power, as opiniões (opinions) are commonly seen as changeable and revisable by nature, often not requiring a solid basis on evidence—one of the definitions of opinião is an “unfounded idea or hypothesis; that which is presumed without certainty; presumption” (Michaelis, Reference Michaelisn.d.). We do acknowledge, however, that it would be interesting for future studies to further investigate the psychometric properties of the revised AOT-E.

Despite these shortcomings, this study also has considerable strengths. Overall, it defies the tendency to overlook psychometric issues that have been pervasive in the AOT literature, as demonstrated by Janssen et al. (2020). In this regard, we performed analyses that were lacking in the original application of the AOT-E, which could help inform future applications of the scale for English-speaking samples. As for our adaptation process, we followed the best practices and guidelines and performed robust data analysis procedures. Throughout the studies, we favored the implementation of polychoric correlations (Muthen & Kaplan, 1985) and corrections for non-normality (Asparouhov & Muthen, 2010), which increased the robustness of the analyses. Furthermore, all of our data and analyses are publicly available in conformance with open science principles.

Perhaps even more importantly, the present study makes a small contribution to fighting the chronic shortage of research instruments adapted and validated for use in non-WEIRD samples. The study of AOT, in general, and AOT-E, in particular, can greatly contribute to understanding the hallmarks of open-minded cognition worldwide. Having an adapted and validated instrument to measure individual differences in how people perceive the role of evidence in shaping their beliefs can open the way for important research in Brazilian samples.

Brazil has recently faced severe challenges, including an unprecedented outbreak of dengue fever that surpassed 5 million cases between January and May 2024 (Ministério da Saúde, 2024), and catastrophic weather events that displaced over 600,000 people in the state of Rio Grande do Sul (Trindade, Reference Trindade2024). The success of efforts to overcome the country’s complex socioeconomic issues goes hand-in-hand with openness to scientific evidence (particularly regarding vaccination and climate change). Research focusing on understanding AOT-E levels across the Brazilian population could help shape future initiatives and increase their chances of contributing to a brighter future.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/jdm.2024.37.

Data availability statement

The datasets and coding for this article are available on the Open Science Framework at https://osf.io/fw5mg/?view_only=3ad6bceb8fe24ea89af3c8313ff1c62c.

Financial support

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES)—Finance Code 001 for the first author and by a Research of Productivity Grant (305259/2022-9) from National Council of Scientific and Technological Development (CNPq) for the third author.

Competing interest

None.

References

APA (n.d.). Nomological network. In APA dictionary of psychology. Retrieved June 14, 2024, from https://dictionary.apa.org/nomological-network Google Scholar
Asparouhov, T., & Muthen, B. (2010). Simple second order chi-square correction [Unpublished manuscript]. https://www.statmodel.com/download/WLSMV_new_chi21.pdf Google Scholar
Baron, J. (1985). Rationality and intelligence. Cambridge University Press. https://doi.org/10.1017/CBO9780511571275CrossRefGoogle Scholar
Baron, J. (1991). Beliefs about thinking. In Voss, J. F., Perkins, D. N., & Segal, J. W. (Eds.), Informal reasoning and education (pp. 169186). Erlbaum. https://doi.org/10.4324/9780203052228 Google Scholar
Baron, J. (1993). Why teach thinking? — An essay. (Target article with commentary.) Applied Psychology: An International Review, 42, 191237. https://doi.org/10.1111/j.1464-0597.1993.tb00731.x CrossRefGoogle Scholar
Baron, J. (1995). Myside bias in thinking about abortion. Thinking and Reasoning, 1, 221235. https://doi.org/10.1080/13546789508256909 CrossRefGoogle Scholar
Baron, J. (2019). Actively open-minded thinking in politics. Cognition, 188, 818. https://doi.org/10.1016/j.cognition.2018.10.004 CrossRefGoogle ScholarPubMed
Baron, J., Scott, S., Fincher, K., & Metz, S. E. (2015). Why does the Cognitive Reflection Test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition, 4(3), 265284. https://doi.org/10.1016/j.jarmac.2014.09.003 CrossRefGoogle Scholar
Bonafé-Pontes, A., Couto, C., Kakinohana, R., Travain, M., Schimidt, L., & Pilati, R. (2021). COVID-19 as infodemic: The impact of political orientation and open-mindedness on the discernment of misinformation in WhatsApp. Judgment and Decision Making, 16(6), 22. https://doi.org/10.1017/S193029750000855X CrossRefGoogle Scholar
Borsa, J. C., Damásio, B. F., & Bandeira, D. R. (2012). Cross-cultural adaptation and validation of psychological instruments: Some considerations. Paidéia, 22(53), 423432. https://doi.org/10.1590/1982-43272253201314 CrossRefGoogle Scholar
Bronstein, M., Kummerfeld, E., MacDonald, A. III, & Vinogradov, S. (2021). Investigating the Impact of Anti-Vaccine News on SARS-CoV-2 Vaccine Intentions. SSRN. https://ssrn.com/abstract=3936927 CrossRefGoogle Scholar
Brotherton, R., French, C. C., & Pickering, A. D. (2013). Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in Psychology, 4, Article 279. https://doi.org/10.3389/fpsyg.2013.00279 CrossRefGoogle ScholarPubMed
Brown, T. A. (2006). Confirmatory factor analysis for applied research. Guilford Press.Google Scholar
Brzóska, P., Nowak, B., Piotrowski, J., Żemojtel-Piotrowska, M., Sawicki, A. J., & Jonason, P. (2022). Different paths to COVID-19’s conspiratorial beliefs for the religious and spiritual: The roles of analytic and open-minded thinking. PsyArXiv https://doi.org/10.31234/osf.io/zsv89 CrossRefGoogle Scholar
Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7(3), 309319. http://doi.org/10.1037/1040-3590.7.3.309 CrossRefGoogle Scholar
Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14(3), 464504. https://doi.org/10.1080/107055107013 CrossRefGoogle Scholar
Church, A. T., Alvarez, J. M., Mai, N. T. Q., French, B. F., Katigbak, M. S., & Ortiz, F. A. (2011). Are cross-cultural comparisons of personality profiles meaningful? Differential item and facet functioning in the revised NEO personality inventory. Journal of Personality and Social Psychology, 101(5), 10681089. https://doi.org/10.1037/a0025290 CrossRefGoogle ScholarPubMed
Chyung, S. Y., Swanson, I., Roberts, K. and Hankinson, A. (2018), Evidence-based survey design: The use of continuous rating scales in surveys. Performance Improvement, 57. 3848. https://doi.org/10.1002/pfi.21763 CrossRefGoogle Scholar
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281302. https://doi.org/10.1037/h0040957 CrossRefGoogle ScholarPubMed
Damásio, B. F. (2012). Uses of exploratory factorial analysis in Psychology. Avaliação Psicológica, 11(2), 213228. http://pepsic.bvsalud.org/pdf/avp/v11n2/v11n2a07.pdf Google Scholar
Davidov, E., Meuleman, B., Cieciuch, J., Schmidt, P., & Billiet, J. (2014). Measurement Equivalence in Cross-National Research. Annual Review of Sociology, 40(1), 5575. http://doi.org/10.1146/annurev-soc-071913-043137 CrossRefGoogle Scholar
DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Sage Publications.Google Scholar
Edgcumbe, D. R. (2021). Age differences in open-mindedness: From 18 to 87-years of age, Experimental Aging Research, 48(1), 2441. https://doi.org/10.1080/0361073X.2021.1923330 CrossRefGoogle ScholarPubMed
Ferrando, P. J., & Lorenzo-Seva, U. (2018). Assessing the quality and appropriateness of factor solutions and factor score estimates in exploratory item factor analysis. Educational and Psychological Measurement, 78, 762780. https://doi.org/10.1177/0013164417719308 CrossRefGoogle ScholarPubMed
Fischer, R., & Karl, J. A. (2019). A Primer to (cross-cultural) multi-group invariance testing possibilities in R. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.01507 CrossRefGoogle ScholarPubMed
Funke, F., & Reips, U.-D. (2012). Why semantic differentials in web-based research should be made from visual analogue scales and not from 5-point scales. Field Methods, 24(3), 310327. https://doi.org/10.1177/1525822X12444061CrossRefGoogle Scholar
Galli, L., & Modesto, J. (2021). A Influência das Crenças Conspiratórias e Orientação Política na Vacinação. Revista de Psicologia da IMED, 13(1), 179193. https://doi.org/10.18256/2175-5027.2021.v13i1.4491 CrossRefGoogle Scholar
Gignac, G. E., & Szodorai, E. T. (2016). Effect size guidelines for individual differences researchers. Personality and Individual Differences, 102, 7478. https://doi.org/10.1016/j.paid.2016.06.069 CrossRefGoogle Scholar
Gummer, T., Roßmann, J., & Silber, H. (2021). Using instructed response items as attention checks in web surveys: Properties and implementation. Sociological Methods & Research, 50(1), 238264. https://doi.org/10.1177/0049124118769083 CrossRefGoogle Scholar
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2005). Análise multivariada de dados [Multivariate data analysis]. Bookman.Google Scholar
Haran, U., Ritov, I., & Mellers, B. A. (2013). The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making, 8(3), 188201. https://doi.org/10.1017/S1930297500005921 CrossRefGoogle Scholar
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 6183. https://doi.org/10.1017/S0140525X0999152X CrossRefGoogle ScholarPubMed
International Test Commission. (2017). The ITC guidelines for translating and adapting tests (2nd ed.). https://www.intestcom.org/ Google Scholar
Janssen, E. M., Verkoeijen, P. P., Heijltjes, A. E., Mainhard, T., van Peppen, L. M., & van Gog, T. (2020). Psychometric properties of the Actively Open-minded Thinking scale. Thinking Skills and Creativity, 36. https://doi.org/10.1016/j.tsc.2020.100659 CrossRefGoogle Scholar
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732735. https://doi.org/10.1038/nclimate1547 CrossRefGoogle Scholar
Kahneman, D, Slovic, P., & Tversky, A. (1982). Judgments under uncertainty: Heuristics and biases. Cambridge University Press. https://doi.org/10.1017/CBO9780511809477 CrossRefGoogle Scholar
Khademi, A., Wells, C. S., Oliveri, M. E., & Villalonga-Olives, E. (2023). Examining Appropriacy of CFI and TLI Cutoff Value in Multiple-Group CFA Test of Measurement Invariance to Enhance Accuracy of Test Score Interpretation. Sage Open, 13(4). https://doi.org/10.1177/21582440231205354 CrossRefGoogle Scholar
Klayman, J. (1995). Varieties of confirmation bias. The Psychology of Learning and Motivation, 32, 385417. https://doi.org/10.1016/S0079-7421(08)60315-1 CrossRefGoogle Scholar
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480498. https://doi.org/10.1037/0033-2909.108.3.480 CrossRefGoogle ScholarPubMed
Leitgöb, H., Seddig, D., Asparouhov, T., Behr, D., Davidov, E., De Roover, K., Jak, S., Meitinger, K., Menold, N., Muthén, B., Rudnev, M., Schmidt, P., van de Schoot, R. (2023). Measurement invariance in the social sciences: Historical development, methodological challenges, state of the art, and future perspectives. Social Science Research, 110. https://doi.org/10.1016/j.ssresearch.2022.102805 CrossRefGoogle ScholarPubMed
Li, C.-H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48, 936–949. https://doi.org/10.3758/s13428-015-0619-7
McCrae, R. R. (2015). A more nuanced view of reliability. Personality and Social Psychology Review, 19(2), 97–112. https://doi.org/10.1177/1088868314541857
Mellers, B. A., Stone, E., Atanasov, P., Rohrbaugh, N., Metz, S. E., Ungar, L., & Tetlock, P. E. (2015). The psychology of intelligence analysis: Drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied, 21(1), 1–14. https://doi.org/10.1037/xap0000040
Michaelis. (n.d.). Opinião [Opinion]. In Michaelis Dicionário de Língua Portuguesa. Retrieved June 14, 2024, from https://michaelis.uol.com.br/moderno-portugues/busca/portugues-brasileiro/opini%C3%A3o
Ministério da Saúde. (2024, May 30). Atualização de Casos de Arboviroses [Update on arbovirus cases]. https://www.gov.br/saude/pt-br/assuntos/saude-de-a-a-z/a/aedes-aegypti/monitoramento-das-arboviroses
Moaz, S., Berryman, K., Winget, J. R., Tindale, R. S., & Ottati, V. (2023). The role of group context in open-minded cognition. In V. Ottati & C. Stern (Eds.), Divided. Oxford University Press. https://doi.org/10.1093/oso/9780197655467.003.0008
Muthén, B., & Kaplan, D. (1985). A comparison of some methodologies for the factor analysis of non-normal Likert variables. British Journal of Mathematical and Statistical Psychology, 38, 171–189. https://doi.org/10.1111/j.2044-8317.1985.tb00832.x
Muthukrishna, M., Bell, A. V., Henrich, J., Curtin, C. M., Gedranovich, A., McInerney, J., & Thue, B. (2020). Beyond Western, educated, industrial, rich, and democratic (WEIRD) psychology: Measuring and mapping scales of cultural and psychological distance. Psychological Science, 31(6), 678–701. https://doi.org/10.1177/0956797620916782
Newton, C. A. (2020). Deliberately thinking about analytic thinking style: Developing the comprehensive thinking style questionnaire (Publication No. 28859629) [Master's thesis, University of Regina]. ProQuest Dissertations & Theses Global.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Novaes, F. C., Bienemann, B., Paveltchuk, F. D. O., Siqueira, P. H. T., & Damásio, B. F. (2019). Development and psychometric properties of the attitude toward science scale. Psico-USF, 24, 763–777. https://doi.org/10.1590/1413-82712019240413
Ottati, V., & Wilson, C. (2018, March 28). Open-minded cognition and political thought. Oxford Research Encyclopedia of Politics. Retrieved June 13, 2024, from https://oxfordre.com/politics/view/10.1093/acrefore/9780190228637.001.0001/acrefore-9780190228637-e-143
Pennycook, G., Bago, B., & McPhetres, J. (2023). Science beliefs, political ideology, and cognitive sophistication. Journal of Experimental Psychology: General, 152(1), 80–97. https://doi.org/10.1037/xge0001267
Pennycook, G., Cheyne, J. A., Koehler, D. J., & Fugelsang, J. A. (2020). On the belief that beliefs should change according to evidence: Implications for conspiratorial, moral, paranormal, political, religious, and science beliefs. Judgment and Decision Making, 15, 476–498. https://doi.org/10.1017/S1930297500007439
Perkins, D. (2019). Learning to reason: The influence of instruction, prompts and scaffolding, metacognitive knowledge, and general intelligence on informal reasoning about everyday social and political issues. Judgment and Decision Making, 14(6), 624–643. https://doi.org/10.1017/S1930297500005350
Perkins, D., Bushey, B., & Faraday, M. (1986). Learning to reason (Final report, Grant No. NIE-G-83-0028, Project No. 030717). Harvard Graduate School of Education. http://finzi.psych.upenn.edu/baron/aot/perkins1986.pdf
Price, E., Ottati, V., Wilson, C., & Kim, S. (2015). Open-minded cognition. Personality and Social Psychology Bulletin, 41, 1488–1504. https://doi.org/10.1177/0146167215600528
Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369–381. https://doi.org/10.1177/0146167202286008
Rad, M. S., Martingano, A. J., & Ginges, J. (2018). Toward a psychology of Homo sapiens: Making psychological science more representative of the human population. Proceedings of the National Academy of Sciences of the United States of America, 115(45), 11401–11405. https://doi.org/10.1073/pnas.1721165115
Reips, U. D., & Funke, F. (2008). Interval level measurement with visual analogue scales in Internet-based research: VAS Generator. Behavior Research Methods, 40(3), 699–704. https://doi.org/10.5167/uzh-4565
Rezende, A. T., Gouveia, V. V., Coelho, G. L. de H., Freires, L. A., Loureto, G. D. L., & Cavalcanti, T. M. (2021). Escala de crenças gerais conspiratórias (ECGC): Desenvolvimento e evidências psicométricas [General conspiracy beliefs scale (GCBS): Development and psychometric evidence]. Avaliação Psicológica, 20(2), 127–138. https://doi.org/10.15689/ap.2021.2002.16198.01
Roster, C. A., Lucianetti, L., & Albaum, G. (2015). Exploring slider vs. categorical response formats in web-based surveys. Journal of Research Practice, 11(1). https://jrp.icaap.org/index.php/jrp/article/view/509.html
Rutjens, B. T., Sutton, R. M., & van der Lee, R. (2018). Not all skepticism is equal: Exploring the ideological antecedents of science acceptance and rejection. Personality and Social Psychology Bulletin, 44(3), 384–405. https://doi.org/10.1177/0146167217741314
Stanovich, K. E., & Toplak, M. E. (2019). The need for intellectual diversity in psychological science: Our own studies of actively open-minded thinking as a case study. Cognition, 187, 156–166. https://doi.org/10.1016/j.cognition.2019.03.006
Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89(2), 342–357. https://doi.org/10.1037/0022-0663.89.2.342
Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161–188. https://doi.org/10.1037/0096-3445.127.2.161
Stanovich, K. E., & West, R. F. (2007). Natural myside bias is independent of cognitive ability. Thinking & Reasoning, 13(3), 225–247. https://doi.org/10.1080/13546780600780796
Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4), 259–264. https://doi.org/10.1177/0963721413480174
Steenkamp, J. E. M., & Baumgartner, H. (1998). Assessing measurement invariance in cross-national consumer research. Journal of Consumer Research, 25(1), 78–90. https://doi.org/10.1086/209528
Svedholm, A. M., & Lindeman, M. (2013). The separate roles of the reflective mind and involuntary inhibitory control in gatekeeping paranormal beliefs and the underlying intuitive confusions. British Journal of Psychology, 104(3), 303–319. https://doi.org/10.1111/j.2044-8295.2012.02118.x
Svedholm, A. M., & Lindeman, M. (2017). Actively open-minded thinking: Development of a shortened scale and disentangling attitudes toward knowledge and people. Thinking and Reasoning, 24(1), 21–40. https://doi.org/10.1080/13546783.2017.1378723
Swami, V., Voracek, M., Stieger, S., Tran, U. S., & Furnham, A. (2014). Analytic thinking reduces belief in conspiracy theories. Cognition, 133(3), 572–585. https://doi.org/10.1016/j.cognition.2014.08.006
Thielmann, I., Akrami, N., Babarović, T., Belloch, A., Bergh, R., Chirumbolo, A., Čolović, P., De Vries, R. E., Dostál, D., Egorova, M., Gnisci, A., Heydasch, T., Hilbig, B. E., Hsu, K., Izdebski, P., Leone, L., Marcus, B., Međedović, J., Nagy, J., … Lee, K. (2019). The HEXACO–100 across 16 languages: A large-scale test of measurement invariance. Journal of Personality Assessment, 102(5), 714–726. https://doi.org/10.1080/00223891.2019.1614011
Timmerman, M. E., & Lorenzo-Seva, U. (2011). Dimensionality assessment of ordered polytomous items with Parallel Analysis. Psychological Methods, 16, 209–220. https://doi.org/10.1037/a0023353
Trindade, P. (2024, May 14). Número de moradores fora de casa após temporais no RS é superior à população de oito capitais no Brasil [Number of residents out of their homes after storms in RS exceeds the population of eight Brazilian capitals]. G1. https://g1.globo.com/rs/rio-grande-do-sul/noticia/2024/05/14/temporais-moradores-fora-de-casa-x-capitais-brasileiras.ghtml
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–69. https://doi.org/10.1177/109442810031002
Van de Schoot, R., Lugtig, P., & Hox, J. (2012). A checklist for testing measurement invariance. European Journal of Developmental Psychology, 9(4), 486–492. https://doi.org/10.1080/17405629.2012.686740
Van de Vijver, F. J. R. (1998). Towards a theory of bias and equivalence. Zuma Nachrichten, 3, 41–65.
Voutilainen, A., Pitkäaho, T., Kvist, T., & Vehviläinen-Julkunen, K. (2016). How to ask about patient satisfaction? The visual analogue scale is less vulnerable to confounding factors and ceiling effect than a symmetric Likert scale. Journal of Advanced Nursing, 72(4), 946–957. https://doi.org/10.1111/jan.12875
Table 1. Exploratory factor analysis of the original AOT-E
Table 2. Exploratory factor analysis of the Brazilian Portuguese version of the original AOT-E
Table 3. CFA goodness-of-fit indices for the Brazilian Portuguese version of the AOT-E
Table 4. Multigroup CFA goodness-of-fit indices
Table 5. Polychoric correlation coefficients

Supplementary material: Bonafé-Pontes et al. supplementary material (File, 10 KB)