The extensive use of digital platforms and services has greatly expanded the range of potential attacks and targets, leaving individuals, democratic values, and democratic institutions vulnerable to a substantial number of cyber-enabled threats. These threats can be sophisticated, conducted on a large scale, and capable of producing significant, viral consequences. Among them, cyber disinformation stands out as a major threat. The phenomenon is widespread and complex and, in certain cases, forms part of hybrid warfare, involving various cyberattacks by nefarious actors who deceptively distribute fake or incomplete materials with a view to influencing people’s opinions or behavior.
Disinformation can involve numerous vectors and take several forms. The goals of disinformation campaigns are to promote or sustain certain economic or political interests, to foster discrimination, phobia, or hate speech, or to harass individuals (European Parliament, 2022). Instances of alleged disinformation can be encountered with respect to a large variety of topics, such as food (Diekman, Ryan, & Oliver, 2023); migrants (Culloty et al., 2022); fossil fuels;Footnote 1 sexual preferences (Carratalá, 2023); health hazards;Footnote 2 politics;Footnote 3 and so on.
Successful disinformation campaigns can negatively affect fundamental freedoms, undermine trust, divert attention, change attitudes, sow confusion, exacerbate divides, or interfere with decision-making processes. Consequently, such campaigns can rightly be considered attacks on knowledge integrity (Pérez-Escolar, Lilleker, & Tapia-Frade, 2023, p. 77). The potential consequences are disquieting, negatively affecting democratic values and institutions (Jungherr & Schroeder, 2021; Schünemann, 2022). Concerns over cyber disinformation are notable worldwide, and the topic has received significant attention from researchers (Buchanan & Benson, 2019; Nenadić, 2019; Olan et al., 2022; Pierri, Artoni, & Ceri, 2020; Tenove & Tworek, 2019; Ternovski, Kalla, & Aronow, 2022; Vaccari & Chadwick, 2020; Weikmann & Lecheler, 2022).
While there are laws that address the phenomenon (e.g., 18 U.S. Code § 35, the German Network Enforcement Act, the French Law on the fight against the manipulation of information), strengthened codes of practice (e.g., the European Commission’s Strengthened Code of Practice on Disinformation 2022), assignment of anti-disinformation attributions to governmental agencies (e.g., the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency), awareness campaigns, and implementation of disinformation detection and blocking algorithms or filters, the control of the phenomenon still poses significant challenges.
The control of the cyber disinformation phenomenon plays a significant role in the protection of democratic values and systems. This chapter argues that the individual behavior of users plays an essential role in controlling the phenomenon, and it aims to identify factors that impact Behavioral Intentions (BIs) and Cyber Hygiene Behavior (CHB) in the context of cyber disinformation. The chapter integrates the Extended Theory of Planned Behavior (ETPB) and a structural equation model. The research data were collected using a questionnaire, and the model’s parameters were estimated using the SmartPLS software.
The rest of this chapter is organized as follows. The next section outlines the phenomenon’s main attributes and explains how cyber-enabled means can threaten democratic values and institutions. The third section discusses aspects regarding structural equation modeling (SEM), applied to disinformation. The fourth section presents the conceptual model and the proposed hypotheses. Finally, the fifth section presents the model evaluation. The chapter concludes with implications of findings.
Cyber Disinformation Attributes
“Disinformation” is a term difficult to define because the phenomenon is complex (Ó Fathaigh, Helberger, & Appelman, 2021) and covers many forms, such as “fabrications, fakeness, falsity, lies, deception, misinformation, disinformation, propaganda, conspiracy theory, satire or just anything with which one disagrees” (Andersen & Søe, 2020, p. 6). Wardle and Derakhshan (2017), for instance, contrast “disinformation,” referred to as intentionally false or deceptive communication, with “misinformation,” understood as communications that may contain false claims but are not intended to cause or inflict harm. The European Commission (2020, p. 18) clearly distinguishes between misinformation, information influence operations, foreign interference in the information space, and disinformation, defining the latter as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm.”
Disinformation can be orchestrated by individuals or by organized groups (state or non-state) and can involve various sources, such as regular citizens, political leaders or officials, attention-seeking trolls, profiteers, or propagandistic media (Watson, 2021). Several factors have been identified that favor the phenomenon, such as the tendency to believe unreliable statements (European Parliamentary Research Service, 2020), people’s difficulties in identifying disinformation (Machete & Turpin, 2020), identity-confirmation problems, and deficiencies in platform filtering (Krafft & Donovan, 2020).
According to Bontcheva et al. (2020, pp. 22–23), disinformation can take various formats, such as false claims or textual narratives; altered, fake, or decontextualized audio and/or video; and fake websites and manipulated datasets. Cyber disinformation campaigns can involve, for example, deceptive advertising, propaganda, or the dissemination of forged materials, such as videos, photographs, audio recordings, and documents (including, for instance, fake web pages or maps), created through image altering, airbrushing, or cover-up, or by audio camouflage.
One of the characteristics of disinformation campaigns is the existence of deceptive goals. According to Fallis (2014, pp. 142–146), deceptive goals can concern the accuracy of the content, whether the source believes the content, the identity of the content’s source, and what the content’s accuracy implies. Disinformation can cause significant harm, as it has the real potential to confuse or manipulate people, suppress the truth or critical voices, generate distrust in democratic institutions or norms, and even disrupt democratic processes (Bontcheva et al., 2020).
Social media is often regarded as a highly effective vector for promoting political goals via disinformation campaigns (Aïmeur, Amri, & Brassard, 2023). Twitter, now called “X,” for example, was used to spread misleading election memes and graphics that went viral, designed to demoralize opponents’ voters and even deter them from exercising their right to vote.Footnote 4 To illustrate the importance attached to X as a disinformation vector: according to Statista research, disinformation and pro-Russian posts on X in Poland numbered 25,910 in January 2022, increasing to 358,945 over the course of the year (Statista, 2023).
In practice, disinformation campaigns employ an impressive array of tactics, including the impersonation of organizations or real people; the creation of fake or misleading online personas or websites; the creation of deepfakes or synthetic media; the devising or amplification of certain theories; astroturfing and flooding; the exploitation of gaps in information; the manipulation of unsuspecting people; and the spread of targeted content (Cybersecurity and Infrastructure Security Agency, 2022). Of particular concern, given their massive disinformation potential, are deepfakes. In video and/or audio form, deepfakes are nowadays very realistic, enabling morphing attacks, the creation of unreal faces or voices, and the delivery of personalized messages to individuals. Deepfakes can negatively affect the credibility of individuals, disrupt markets, facilitate fraud, manipulate public opinion, incite people to various forms of violence, and support extremist narratives, social unrest, or political polarization (Mattioli et al., 2023; Trend Micro, 2020). Moreover, deepfakes undermine conversations about reality and can disrupt democratic politics (Chesney & Citron, 2019).
Personalization algorithms can be employed to facilitate the spreading of disinformation, potentially making it thrive on digital platforms (Borges & Gambarato, 2019). The techniques or means employed in disinformation campaigns may also include, for instance, bots. These are increasingly difficult to distinguish from humans and can be effectively used to produce disinformation content targeted at predetermined or general users (Edwards et al., 2016). For instance, bots are used to disseminate election disinformation (Knight Foundation, 2018) or disinformation regarding health issues (Benson, 2020).
Artificial neural networks (ANNs) and deep learning methods can also be used in disinformation campaigns, with unlawful or nefarious potential (Rana et al., 2022). Amplifiers, such as influential people or artificial intelligence tools, can be used, for instance through cross-platform coordination or the manipulation of engagement metrics, to maximize engagement or the spread of disinformation through networks, to retweet or follow X accounts, or to share Facebook posts (the platform’s parent company is now called Meta) (European Commission, 2023; Michaelis, Jafarian, & Biswas, 2022). Clickbait is another disinformation method, used to attract online users to click on disinformation links (Collins et al., 2021).
Structural Equation Modeling of Disinformation
The Technology Acceptance Model (TAM) was proposed to explore user behavior and acceptance of Information and Communication Technology (ICT) from a social psychology perspective. Its original formulation assumed that the use of technical systems could be explained and predicted by user motivation, which is directly influenced by external factors (i.e., the functionalities and capabilities of those technical systems) (Chuttur, 2009; Davis, 1985). The TAM assumes that two factors determine users’ acceptance of a technology: (1) perceived usefulness and (2) perceived ease of use. The first refers to the degree to which users believe that using the technology enhances their job performance, productivity, or overall effectiveness. The second represents the user’s perception of how easy the technology is to use. In general, the acceptance of technology is a critical element and a necessary condition for the implementation of ICT in everyday life. An extensive literature survey conducted in the Scopus database identified 18,639 papers on TAM published between 1964 and 2023.
Over the decades, several theoretical models have been developed to understand the acceptance and use of ICT, and researchers have often hesitated when selecting the appropriate theoretical model for evaluating ICT acceptance and usage. Recognizing ICT needs and the acceptance of ICT by individuals in business organizations is usually the starting point of any business activity, and this understanding can help chart the path for future ICT implementation.
In general, TAM models are estimated through SEM, which is an approach for testing hypotheses about relations among observable and latent variables (Sabi et al., 2016). SEM is a statistical method applied in various fields of the social sciences to estimate relationships among specified variables and to verify whether those relations are statistically reliable and valid. In this study, SEM is realized through Partial Least Squares–Structural Equation Modeling (PLS-SEM), which is a composite-based SEM method (Hair Jr. et al., 2017). Partial Least Squares (PLS) is a statistical method for estimating relationships between independent and dependent variables.
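To make the composite-based idea concrete, the following is a minimal, simplified sketch in Python. It is not the SmartPLS algorithm: real PLS-SEM iteratively re-weights the indicators, whereas this sketch uses equal weights on standardized items purely to illustrate the two layers of the model (measurement and structural). The item codes follow Table 5.1, and the helper names are hypothetical.

```python
# Simplified sketch of the composite-based idea behind PLS-SEM (illustration only):
#   (1) measurement model: composite scores built from observed items,
#   (2) structural model: regressions among the composites.
import numpy as np
import pandas as pd

def composite(df: pd.DataFrame, items: list) -> pd.Series:
    """Equally weighted composite score of standardized indicator columns."""
    z = (df[items] - df[items].mean()) / df[items].std(ddof=0)
    return z.mean(axis=1)

def path_coefficients(y: pd.Series, X: pd.DataFrame) -> dict:
    """Standardized OLS path coefficients of one endogenous composite on its predictors."""
    Xz = (X - X.mean()) / X.std(ddof=0)
    yz = (y - y.mean()) / y.std(ddof=0)
    A = np.column_stack([np.ones(len(Xz)), Xz.to_numpy()])
    beta, *_ = np.linalg.lstsq(A, yz.to_numpy(), rcond=None)
    return dict(zip(X.columns, beta[1:]))

# Usage with a survey DataFrame `data` holding the 7-point Likert items:
# pbc = composite(data, ["PBC1", "PBC2", "PBC3", "PBC4"])
# mn  = composite(data, ["MN1", "MN2", "MN3", "MN4"])
# bi  = composite(data, ["BI1", "BI2", "BI3", "BI4", "BI5"])
# print(path_coefficients(bi, pd.DataFrame({"PBC": pbc, "MN": mn})))
```

In the full PLS-SEM procedure, the indicator weights would be updated iteratively until a stop criterion is met, and those weighted composites would replace the equally weighted scores used here.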
For the past thirty years, the research community has been strongly involved in the identification of the factors that have an impact on technology acceptance. The theory of reasoned action (TRA) and the theory of planned behavior (TPB) were predecessors of the TAM (Marikyan & Papagiannidis, 2021; Park & Park, 2020). The TRA explains and predicts human behaviors considering their attitudes and Subjective Norms (SN). That theory assumes that individuals make rational decisions based on their attitudes and social norms.
The TPB is an extension of the TRA, also proposed by Ajzen (2005). The TPB explains and predicts individual behavior based on human intentions, which depend on three factors: attitude, identified with personal beliefs; SN, referring to social pressure; and Perceived Behavioral Control (PBC), encompassing self-efficacy, perceived obstacles, and facilitators. The TAM underwent further modifications (i.e., TAM2, TAM3); however, researchers have also utilized the Unified Theory of Acceptance and Use of Technology (UTAUT) model, which suggests that the actual use of technology is determined by BI. In turn, BI depends on four key constructs: performance expectancy, effort expectancy, social influence, and facilitating conditions. The effect of these variables is moderated by age, gender, experience, and voluntariness of use (Venkatesh et al., 2003). Further, researchers have noticed the importance of factors reflecting the costs and benefits of behavior, as well as the context of use.
Venkatesh, Thong, and Xu (2012) proposed the UTAUT2 model, which extends the UTAUT to technology use in consumer settings. The authors of the UTAUT2 argue that the use of technology by individuals is determined by the following constructs: performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit, moderated by age, gender, and experience. In both the UTAUT and UTAUT2 models, BI has an impact on use behavior.
This study aims to identify factors having an impact on Behavioral Intentions (BIs) and Cyber Hygiene Behavior (CHB) in circumstances of wide dissemination of disinformation in cyberspace. BIs comprise an individual’s predispositions and willingness to behave in a specific way; the concept is included in various theories of human behavior to analyze and predict human actions. CHB covers the practices that reduce cyber vulnerabilities and the risk of falling victim to cyberattacks.
This study considered state-of-the-art research work on disinformation attitudes. The Scopus literature survey returned 4,526 publications for the “disinformation” keyword search, published between 1974 and 2023. However, there are only nineteen publications for the “disinformation” AND “structural equation modelling” query, all published between 2020 and 2023. Certain countries (e.g., China, Russia, and Turkey) have professionalized online operations to support social media campaigns, create an alternative informational space, and effectively disseminate persuasive messages through symbols and emotions. Therefore, monitoring their actions, as well as online trolling, is a subject of research (Alieva, Moffitt, & Carley, 2022; Uyheng, Moffitt, & Carley, 2022).
Several researchers examine the phenomenon of disinformation as a threat in the sphere of cybersecurity (Caramancion et al., 2022; Carrapico & Farrand, 2021). Hence, cybersecurity issues have been included as observable variables (i.e., Security (SEC) items) in the proposed survey. Arayankalam and Krishnan (2021) formulated, and positively verified, several hypotheses concerning disinformation, as follows:
Disinformation through social media is positively associated with online media development.
Online media development is positively associated with social media-induced offline violence.
Government control negatively moderates disinformation.
In social psychology, the TPB is one of the most influential behavioral models. The TPB links beliefs to behavior and assumes that the user’s behavior is determined by their intentions to perform that behavior. In the conceptual model proposed by Jia et al. (2022), for example, several factors have an impact on Behavioral Attitudes (BA), Subjective Norms (SN), and Perceived Behavioral Control (PBC); these three variables, in turn, determine BI. Moreover, Shirahada and Zhang (2022) argue that the TPB is used to predict and explain human intentions in a particular context. Intentions are influenced by SN, attitudes, and PBC. SN concern the expectations of other people and social pressures regarding desirable behavior, attitude refers to evaluations of behavior, and PBC refers to the ease of performing a behavior.
Maheri, Rezapour, and Didarloo (2022) argue that Perceived Risk (PR) refers to subjective assessments of the risk of disinformation and its potential consequences. SN refer to respondents’ beliefs that significant others think they should or should not engage in a particular behavior. PBC concerns participants’ perceptions of their ability to perform verification.
Romero-Galisteo et al. (2022) consider that the TPB explains the degree of correlation between variables, that is, entrepreneurial intention, perceived feasibility, and perceived desirability. Beyond that, Savari, Mombeni, and Izadi (2022) develop an Extended Theory of Planned Behavior (ETPB) that includes additional variables, namely Descriptive Norms (DN), Moral Norms (MN), Habits (HA), and Justification (JU). In their theoretical framework, Attitude, SN, DN, and PBC have impacts on Intention, while PBC, MN, and variables such as HA and JU influence Behavior.
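Written out as structural equations, the relations just described can be sketched as follows. The coefficient symbols and error terms are illustrative notation rather than quantities reported by Savari, Mombeni, and Izadi, and the Intention-to-Behavior link is the one posited by the standard TPB.

```latex
% Illustrative structural form of the ETPB described above (coefficients hypothetical)
\begin{aligned}
\text{Intention} &= \beta_1\,\text{Attitude} + \beta_2\,\text{SN} + \beta_3\,\text{DN} + \beta_4\,\text{PBC} + \varepsilon_1,\\
\text{Behavior}  &= \gamma_1\,\text{Intention} + \gamma_2\,\text{PBC} + \gamma_3\,\text{MN} + \gamma_4\,\text{HA} + \gamma_5\,\text{JU} + \varepsilon_2.
\end{aligned}
```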
MN define a sense of inherent moral commitment, according to a value system. The concept of DN captures a person’s perception of the extent to which other people exhibit a certain behavior. These norms are introduced because people learn not only from their own experiences but also from analyzing the behaviors of others. A comparable extension of the TPB was provided by Khani Jeihooni et al. (2022), who included, among others, attitude, PBC, SN, and BIs in their survey.
According to Ababneh, Ahmed, and Dedousis (2022), the TPB proposes four major predictors of human behavior (i.e., attitude toward the behavior, SN, BI, and PBC). Ajzen (1991, p. 183) argues that attitude, norms, and control determine intentions and behavior. Cabeza-Ramirez et al. (2022) note that the literature has rarely considered the possible perception of risk associated with desirable behavior. Similarly, risk is included in the TPB model proposed by Zhang et al. (2022). Security is considered in the TPB model developed by Al-Shanfari et al. (2022), who used the SEM method to reveal factors having an impact on the adoption of information security behavior and employees’ training.
The Conceptual Model and Hypotheses
Considering the literature survey on latent variables included in TPB models, this study notes that there is no standardized approach: the models are formulated according to researchers’ preferences, and various extensions are possible. Therefore, this study applies the ETPB model, adding constructs that are expected to capture the context of behavior.
This study defines the BA as the degree to which a person believes that they can properly recognize disinformation. The BA influences the decision on whether to accept or reject the information. The BA reveals the extent to which a person believes that the use of information is needed and not harmful. BA refers to personal predispositions to act in a specific way, regarding a particular object, person, situation, concept, or technology.
Beyond that, this study proposes to include three types of norms, that is, MN, SN, and DN. MN result from personal internal beliefs not to tolerate disinformation; in general, MN are principles or rules that govern human behavior and establish what is right or wrong in a social community. SN concern a person’s perception of social pressure or influence to perform or not perform a particular action; they result from personal motivation, normative beliefs, individuals’ knowledge, and the influence and experiences of third parties who may affect the questionnaire respondent. DN concern the perceptions that individuals have about the behaviors exhibited by others in a community.
In this study, DN reveal the degree to which the respondent forms an image of themselves as a person who knows how to avoid disinformation. PBC means the personal belief that an individual has the capabilities, that is, the competencies and resources, to control factors that may influence their behavior. In this study, PBC refers to the degree to which the respondent believes in their ability to exercise self-control and avoid disinformation. Therefore, this study proposes the following hypotheses:
H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).
H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).
Beyond the variables considered in the TPB model, this study added other variables. Two of them, HA and JU, were introduced to the ETPB model by Savari, Mombeni, and Izadi (2022). HA are repetitive actions, performed regularly or automatically in human lives; they can be positive (i.e., good habits, e.g., teeth cleaning) or negative (i.e., unhealthy habits, e.g., avoiding physical activity). In this chapter, HA includes the individual practices and routines applied by respondents, particularly avoiding internet news. JU refers to collecting and revealing the reasons for a particular action, decision, or belief; here, JU means a personal explanation of the regulations, policies, and administrative practices intended to avoid disinformation.
This study also considered the impact of variables related to security, Anxiety (AN), and risk. Hence, the conceptual model covers three additional factors that may influence CHB, which comprises the practices and habits used to maintain an elevated level of cybersecurity and to protect digital assets; it may also include preventive steps to maintain mental health and to avoid unreliable, untested, unchecked, and malicious information. Cyber AN is the degree to which a person hesitates to use internet information because of its potential harmfulness. PR is defined as the degree of risk recognized by an individual. SEC means the level of knowledge of Information Technology (IT) tools that protect against attacks by human agents or software. Hence, the next hypotheses are as follows:
H8: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).
H9: Habits (HA) have an impact on Cyber Hygiene Behavior (CHB).
H10: Perceived Risks (PRs) have an impact on Cyber Hygiene Behavior (CHB).
H11: Security (SEC) has an impact on Cyber Hygiene Behavior (CHB).
H12: Anxiety (AN) has an impact on Cyber Hygiene Behavior (CHB).
Figure 5.1 includes the conceptual model of variables having an impact on CHB. In this theoretical framework, relationships among constructs, that is, latent variables, as well as between constructs and their assigned indicators, that is, items or observable variables, are shown with arrows.

Figure 5.1 Conceptual model.
Observable Indicators for Cyber Hygiene Behavior Model
For the past thirty years, ICT in general, and the internet in particular, have played a significant role in communications among people in all sectors of life (i.e., education, administration, business, health care, and agriculture). The benefits of ICT outweigh the risks and waste caused by disinformation. To evaluate young people’s behavior and recognize factors having an impact on their BIs and actions, the TPB model has been specified and estimated. The literature survey on the TAM, UTAUT, and TPB models shows that researchers focus on the identification of latent variables; however, the specification of observable items, that is, indicators, should also be discussed.
Considering the items identified in the literature and proposed by other researchers, this study’s items are included in Table 5.1.

Table 5.1 Latent variables, survey items, and item means for the two samples (Mean RO, Mean PL)

| Latent variable | Item | Mean RO | Mean PL |
|---|---|---|---|
| Anxiety (AN) | AN1: I feel apprehensive about finding fake news on the internet | 4.115 | 3.245 |
| | AN2: I hesitate to use social media for fear of finding fake news | 2.285 | 1.685 |
| | AN3: Fake news are threats to democratic values and democratic institutions | 5.394 | 4.925 |
| Perceived Risk (PR) | PR1: Buying products promoted by an unreliable source adds to the uncertainty about the results | 5.782 | 5.780 |
| | PR2: Disinformation destroys a positive image and reputation | 5.842 | 5.890 |
| | PR3: I accept the risk to enable learning from uncertain sources | 3.194 | 5.080 |
| | PR4: I think there is no risk in using social media to meet new people | 2.291 | 2.900 |
| Security (SEC) | SEC1: Anti-spamming software allows me to avoid fake news | 3.982 | 4.070 |
| | SEC2: Internet service provider warns me about fake news | 2.327 | 3.500 |
| | SEC3: I pay consideration to website artifacts, i.e., padlock or https | 4.024 | 4.755 |
| Moral Norms (MN) | MN1: Avoiding fake news dissemination is a matter of conscience for me | 5.600 | 4.040 |
| | MN2: I feel compelled by my conscience to punish fake news providers | 4.279 | 5.040 |
| | MN3: I feel uncomfortable when I observe that other people tolerate fake news dissemination | 5.497 | 4.660 |
| | MN4: I feel responsible for the true information inserted by me on the internet | 6.115 | 5.110 |
| Behavioral Attitude (BA) | BA1: I like to be engaged in the activity for fake news recognition | 4.121 | 2.830 |
| | BA2: I believe that constant monitoring of COVID-19 news has a positive impact on my mental health | 2.521 | 3.410 |
| | BA3: I have enough responsibility not to read fake news | 5.370 | 4.995 |
| | BA4: I think it is better to verify the information provenance | 6.467 | 5.825 |
| | BA5: I think that unreliable source of data may provide fake news | 5.697 | 4.660 |
| | BA6: I think that losers and crazy people provide fake news on the internet | 3.618 | 3.940 |
| | BA7: I think fake news is like a joke | 2.455 | 3.685 |
| Subjective Norms (SN) | SN1: Some of my colleagues have been deceived by fake news | 5.091 | 4.620 |
| | SN2: Public opinion will affect my choice of the internet news | 3.393 | 4.300 |
| | SN3: People whom I work with help each other to recognize fake news | 4.327 | 4.360 |
| | SN4: People whom I trust warn me and explain to me the fake news | 5.164 | 4.930 |
| Descriptive Norms (DN) | DN1: I think most of my friends know how to avoid fake news | 4.709 | 4.875 |
| | DN2: I am sure that people around me do not read unreliable news | 3.273 | 3.895 |
| | DN3: I believe that most of my family thinks that reading unreliable news is unreasonable and wrong | 4.685 | 5.040 |
| | DN4: Reading fake news is disgusting to the people around me | 4.006 | 4.120 |
| Perceived Behavioral Control (PBC) | PBC1: My technical ability is sufficient to avoid disinformation | 5.079 | 5.270 |
| | PBC2: I purposefully avoid nonverified information | 5.364 | 5.005 |
| | PBC3: I know how to avoid fake news | 5.267 | 5.335 |
| | PBC4: I think I have good self-control | 5.818 | 5.175 |
| Behavioral Intention (BI) | BI1: I would like to know more about the possibilities of verifying internet information | 6.236 | 4.945 |
| | BI2: I will recommend my friends or relatives to verify information from uncertain or unknown sources | 6.073 | 4.890 |
| | BI3: Post COVID-19, I carefully check information on it | 5.442 | 4.435 |
| | BI4: I will take good care of myself, particularly when I am browsing unsafe portals | 5.933 | 5.525 |
| | BI5: I am still looking for news that allows me to verify the information received earlier | 5.418 | 4.655 |
| Habits (HA) | HA1: I do not think about the fake news on the internet because I do not read internet news | 2.539 | 3.450 |
| | HA2: I habitually always pay attention to reliability of news and always check the source of information | 5.273 | 4.910 |
| | HA3: I always read reliable information on the internet because it has become a habit for me | 4.836 | 4.570 |
| Justification (JU) | JU1: Due to the fake news dissemination, people do not trust each other and the internet is not a reliable source of information | 4.685 | 4.855 |
| | JU2: Governmental activities to punish and reduce fake news are small and hard to notice | 5.685 | 5.050 |
| | JU3: The habit of reducing fake news on the internet is usually forgotten when people need to receive important information, for example on COVID-19 risks | 5.333 | 4.550 |
| | JU4: Increasing the punishment for fake news is often overlooked because there is so much everyday news and people do not remember nor recognize what is false or true | 5.497 | 4.985 |
| Cyber Hygiene Behavior (CHB) | CHB1: I avoid constantly studying the news on gossip portals | 5.358 | 5.135 |
| | CHB2: I will not encourage others to study the gossip portal news | 5.430 | 5.545 |
| | CHB3: I immediately remove emails from unknown senders | 4.873 | 4.610 |
| | CHB4: I do not click on links or attachments from uncollected emails or texts | 6.515 | 5.970 |
The research data were collected using a questionnaire and analyzed using SEM. The survey respondents were students at the University of Economics in Katowice (Poland) and the Babeş-Bolyai University (Romania). The questionnaires were distributed to bachelor, master, and doctoral-level students. The responses to the questionnaire were voluntary and anonymized. This research collected 200 questionnaires from the University of Economics in Katowice and 165 questionnaires from the Babeş-Bolyai University.
The students were asked to express their degree of agreement or disagreement with the statements in Table 5.1 by marking their answers on a seven-point Likert scale with the following meanings: 1 – absolutely disagree; 2 – disagree; 3 – rather disagree; 4 – irrelevant; 5 – rather agree; 6 – agree; and 7 – definitely agree.
Table 5.1 contains the items included in the survey and presents the list of questions with acronyms and the set of latent variables. The last two columns in Table 5.1 include the average (mean) values of these research indicators for the two samples. The Pearson correlation coefficient between the two last columns in Table 5.1 is 0.7854; hence, the authors conclude that the responses of the respondents from the two populations under research are highly comparable.
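As a point of reference (not the authors' code), the comparability check just described amounts to a Pearson correlation between the two mean columns of Table 5.1; a minimal sketch is shown below, with only the first few item means filled in for illustration.

```python
# Minimal sketch of the comparability check: Pearson correlation between the
# per-item mean responses of the two samples (the last two columns of Table 5.1).
# Only the first four items are shown; using all items from Table 5.1 should
# reproduce the value of about 0.7854 reported in the text.
import numpy as np

mean_ro = np.array([4.115, 2.285, 5.394, 5.782])  # column "Mean RO" (first items)
mean_pl = np.array([3.245, 1.685, 4.925, 5.780])  # column "Mean PL" (first items)

r = np.corrcoef(mean_ro, mean_pl)[0, 1]
print(f"Pearson r = {r:.4f}")
```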
The TPB Model Evaluation
The presented conceptual model (see Figure 5.1) consists of the items connected to their latent variables. SmartPLS3 was used to calculate the model (Ringle, Hair, & Sarstedt, 2014). In the first run, the model was calculated with the PLS algorithm, with the maximum number of iterations set to 1,000 and the stop criterion set to 10⁻⁷. Then the model was calculated with the bootstrapping algorithm, with the number of samples set to 5,000 (complete bootstrapping), using bias-corrected and accelerated confidence intervals and a two-tailed test. The significance level was set to 0.05.
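The generic logic of this bootstrap significance test can be sketched as follows. This is only an illustration of resampling with replacement and of deriving a t-statistic and a two-tailed p-value under a normal approximation; the bias-corrected and accelerated adjustment applied by SmartPLS is not reproduced here, and the function names are hypothetical.

```python
# Hedged sketch of bootstrapping a single path coefficient (illustration only).
import math
import numpy as np

def bootstrap_path(estimate_path, data, n_boot=5000, seed=42):
    """estimate_path: function mapping a data matrix (rows = respondents) to one
    path coefficient; returns (original estimate, t-statistic, p-value)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    original = estimate_path(data)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample respondents with replacement
        boots[b] = estimate_path(data[idx])
    se = boots.std(ddof=1)                    # bootstrap standard error
    t_stat = abs(original) / se
    # two-tailed p-value from the standard normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(t_stat / math.sqrt(2))))
    return original, t_stat, p_value
```

Here `estimate_path` stands for any routine that re-estimates the model on the resampled data and returns the coefficient of interest, for example the PBC-to-BI path.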
The conceptual model (Figure 5.1) was estimated twice: first with the data gathered in Poland and then with the data gathered in Romania. The reliability of the variables was evaluated using Cronbach’s Alpha and Composite Reliability (CR), and the results for reliability and validity are presented for the overall sample. Cronbach’s Alpha assesses reliability by comparing the amount of shared variance, or covariance, among the items in a psychological test or questionnaire (Collins, 2007). CR is an “indicator of the shared variance among the observed variables used as an indicator of a latent construct” (Fornell & Larcker, 1981). In psychology, Cronbach’s Alpha and CR values are recommended to be higher than 0.600; Cronbach’s Alpha values of 0.60 to 0.70 are acceptable in exploratory research, while values between 0.70 and 0.90 are regarded as satisfactory (Nunnally & Bernstein, 1994). The Average Variance Extracted (AVE) and CR values should be higher than or close to 0.500 and 0.700, respectively, which corroborates convergent validity; Fornell and Larcker (1981) note that if the AVE is less than 0.5 but the CR is higher than 0.6, the convergent validity of the construct is still adequate. The results for reliability and validity for the sample of 200 records from Poland are included in Tables 5.2 and 5.3. Unfortunately, in the first estimation, the chosen observed variables did not explain the latent variables well, so the model had to be re-specified: the constructs with poor reliability were removed from the preliminary conceptual model, and Figure 5.2 presents the re-estimated model, which covers the hypotheses listed after the tables.
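For readers who want to reproduce measures of the kind reported in Tables 5.2 and 5.3, the following is a minimal sketch (assumed, not the authors' code) of Cronbach's Alpha, CR, and AVE; the loadings at the end are purely illustrative.

```python
# Hedged sketch of the reliability and validity measures discussed above.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: (n_respondents, k_items) array of raw Likert scores."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    sum_item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

def composite_reliability(loadings):
    """loadings: standardized outer loadings of one construct's indicators."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Illustrative (hypothetical) loadings for a four-item construct:
lam = [0.72, 0.68, 0.55, 0.61]
print(composite_reliability(lam), average_variance_extracted(lam))
```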
Table 5.2 Reliability and validity of the preliminary model (Poland sample)
| Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
|---|---|---|---|---|
| AN | 0.454 | 0.553 | 0.696 | 0.447 |
| BA | 0.373 | 0.485 | 0.618 | 0.234 |
| BI | 0.678 | 0.704 | 0.795 | 0.442 |
| CHB | 0.609 | 0.617 | 0.775 | 0.468 |
| DN | 0.592 | 0.543 | 0.673 | 0.389 |
| HA | 0.272 | 0.521 | 0.508 | 0.404 |
| JU | 0.647 | 0.678 | 0.779 | 0.473 |
| MN | 0.599 | 0.697 | 0.752 | 0.443 |
| PBC | 0.685 | 0.70 | 0.806 | 0.512 |
| PR | −0.115 | 0.267 | 0.005 | 0.321 |
| SEC | 0.325 | 0.226 | 0.533 | 0.356 |
| SN | 0.410 | 0.717 | 0.638 | 0.377 |
Table 5.3 Reliability and validity of the final model (Poland sample)
| Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
|---|---|---|---|---|
| BI | 0.678 | 0.703 | 0.795 | 0.442 |
| CHB | 0.609 | 0.626 | 0.774 | 0.467 |
| DN | 0.592 | 0.540 | 0.665 | 0.386 |
| JU | 0.647 | 0.677 | 0.779 | 0.473 |
| MN | 0.599 | 0.695 | 0.751 | 0.443 |
| PBC | 0.685 | 0.701 | 0.806 | 0.512 |

Figure 5.2 The final model with estimated coefficients (sample size: 200 records from Poland).
H1: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).
H2: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).
H3: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).
H4: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).
H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).
H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).
Path coefficients and R2 values for the constructs are included in Table 5.4.

Table 5.4 Path coefficients and R2 values (Poland sample)
The table reports the relationships among the six constructs of the final model (BI, CHB, DN, JU, MN, and PBC) together with the R2 values of the endogenous constructs: 0.350 for BI and 0.246 for CHB. The paths into BI come from DN, MN, and PBC, with coefficients of 0.142, 0.290, and 0.376, respectively; the paths into CHB come from BI and JU, with coefficients of 0.169 and 0.304, respectively.
The goodness of the model is estimated by the strength of each structural path, determined by the R2 value for the dependent variables (Jankelová, Joniaková, & Skorková, 2021). Generally, R2 is a statistical measure of the goodness of fit of a regression model; for the dependent variables, the R2 value should be equal to or over 0.125 (Falk & Miller, 1992). The results in Table 5.4 show that all R2 values are over 0.1.
R2 ranges from 0 to 1, with higher values indicating stronger explanatory power. As a general guideline, R2 values of 0.75, 0.50, and 0.25 can be considered substantial, moderate, and weak, respectively, in many social science disciplines (Hair Jr. et al., 2021). However, acceptable R2 values depend on the research context, and in some disciplines an R2 value as low as 0.10 is considered satisfactory; for research with large sample sizes, such a value may be statistically significant yet substantively almost meaningless (Falk & Miller, 1992).
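For completeness, here is a small sketch of how an R2 value for an endogenous construct such as BI or CHB can be computed from composite scores (observed versus model-predicted values); the function name is illustrative, not the authors' code.

```python
# Hedged sketch: coefficient of determination (R^2) for an endogenous construct.
import numpy as np

def r_squared(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = ((observed - predicted) ** 2).sum()        # unexplained variation
    ss_tot = ((observed - observed.mean()) ** 2).sum()  # total variation
    return 1 - ss_res / ss_tot
```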
Table 5.5 covers the bootstrapping path coefficient values for the final model, as well as the decisions on the proposed hypotheses’ acceptance or rejection.
Table 5.5 Bootstrapping path coefficients and hypothesis decisions (Poland sample)
| Hypothesis No | Hypothesis (impact direction →) | Original sample | Sample mean | Standard deviation | T-statistics | P values | Decision |
|---|---|---|---|---|---|---|---|
| H1 | BI → CHB | 0.169 | 0.174 | 0.078 | 2.175 | 0.030 | Accepted |
| H2 | DN → BI | 0.142 | 0.156 | 0.085 | 1.673 | 0.094 | Rejected |
| H3 | JU → CHB | 0.304 | 0.316 | 0.068 | 4.453 | 0.000 | Accepted |
| H4 | MN → BI | 0.290 | 0.293 | 0.065 | 4.476 | 0.000 | Accepted |
| H5 | PBC → BI | 0.376 | 0.378 | 0.063 | 6.019 | 0.000 | Accepted |
| H6 | PBC → CHB | 0.150 | 0.150 | 0.075 | 1.994 | 0.046 | Accepted |
The results of the tests indicate that the proposed constructs (i.e., JU, MN, PBC) have a weak impact on students’ intention and behavior (expressed as BI and CHB) to avoid disinformation. If a P value is below a certain threshold, usually 0.05, then the corresponding hypothesis is assumed to be supported (Kock, 2014). Therefore, in this research, hypotheses H1, H3, H4, H5, and H6 are supported, but hypothesis H2 is rejected. This means that: (1) Behavioral Intention (BI) has an impact on Cyber Hygiene Behavior (CHB); (2) Justification (JU) has a positive impact on Cyber Hygiene Behavior (CHB); (3) Moral Norms (MN) have a weak impact on Behavioral Intention (BI); (4) Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI); and (5) Perceived Behavioral Control (PBC) has an impact on Cyber Hygiene Behavior (CHB).
Next, the study estimated the conceptual model using the data from Romania. However, for these data too, the reliability and validity measures have low values (Table 5.6), so the authors eliminated some variables from the model.
Table 5.6 Reliability and validity of the preliminary model (Romania sample)
| Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
|---|---|---|---|---|
| AN | 0.453 | 0.461 | 0.733 | 0.480 |
| BA | 0.294 | 0.553 | 0.416 | 0.244 |
| BI | 0.707 | 0.721 | 0.808 | 0.459 |
| CHB | 0.487 | 0.505 | 0.703 | 0.381 |
| DN | 0.612 | 0.624 | 0.774 | 0.463 |
| HA | 0.045 | 0.711 | 0.508 | 0.593 |
| JU | 0.640 | 0.642 | 0.786 | 0.480 |
| MN | 0.705 | 0.726 | 0.819 | 0.535 |
| PBC | 0.674 | 1.104 | 0.711 | 0.400 |
| PR | 0.019 | 0.383 | 0.091 | 0.304 |
| SEC | 0.459 | −0.254 | 0.177 | 0.309 |
| SN | 0.470 | 0.537 | 0.674 | 0.391 |
Since the chosen observed variables did not explain the latent variables well, the model again had to be re-specified: the constructs with poor reliability were removed from the preliminary conceptual model, and Figure 5.3 presents the re-estimated model, which covers the following hypotheses:

Figure 5.3 The final model with estimated coefficients (sample size: 165 records from Romania).
H1: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).
H2: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).
H3: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).
H4: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).
H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).
H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).
The same reliability and validity verification was done for the Romania model (Table 5.7).
Table 5.7 Reliability and validity of the final model (Romania sample)
| Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE) |
|---|---|---|---|---|
| BI | 0.707 | 0.726 | 0.808 | 0.458 |
| CHB | 0.543 | 0.559 | 0.812 | 0.685 |
| DN | 0.612 | 0.622 | 0.774 | 0.463 |
| JU | 0.640 | 0.677 | 0.783 | 0.476 |
| MN | 0.705 | 0.726 | 0.819 | 0.535 |
| PBC | 0.674 | 1.239 | 0.688 | 0.382 |
Path coefficients and R2 values for the constructs are included in Table 5.8.

Table 5.8 Path coefficients and R2 values (Romania sample)
The table reports the relationships among the six constructs of the final model together with the R2 values of the endogenous constructs: 0.376 for BI and 0.333 for CHB. The paths into BI come from DN, MN, and PBC, with coefficients of −0.043, 0.509, and 0.233, respectively; the paths into CHB come from BI, JU, and PBC, with coefficients of 0.462, 0.050, and 0.185, respectively.
Table 5.9 covers the bootstrapping path coefficient values for the final model, as well as the decisions on the proposed hypotheses’ acceptance or rejection.
Table 5.9 Bootstrapping path coefficients and hypothesis decisions (Romania sample)
| Hypothesis No | Hypothesis (impact direction →) | Original sample | Sample mean | Standard deviation | T-statistics | P values | Decision |
|---|---|---|---|---|---|---|---|
| H1 | BI → CHB | 0.462 | 0.450 | 0.101 | 4.578 | 0.000 | Accepted |
| H2 | DN → BI | −0.043 | −0.007 | 0.098 | 0.434 | 0.665 | Rejected |
| H3 | JU → CHB | 0.050 | 0.076 | 0.085 | 0.589 | 0.556 | Rejected |
| H4 | MN → BI | 0.509 | 0.519 | 0.071 | 7.131 | 0.000 | Accepted |
| H5 | PBC → BI | 0.233 | 0.232 | 0.090 | 2.591 | 0.010 | Accepted |
| H6 | PBC → CHB | 0.185 | 0.191 | 0.077 | 2.413 | 0.016 | Accepted |
The results of the tests indicate that the proposed constructs (i.e., MN, PBC) have a weak or moderate impact on students’ intention and behavior (expressed as BI and CHB) to avoid disinformation. In this research, the threshold of the P value is also 0.05. Therefore, hypotheses H1, H4, H5, and H6 are supported, but hypotheses H2 and H3 are rejected. This means that: (1) Behavioral Intention (BI) has an impact on Cyber Hygiene Behavior (CHB); (2) Moral Norms (MN) have a moderate impact on Behavioral Intention (BI); (3) Perceived Behavioral Control (PBC) has a weak positive impact on Behavioral Intention (BI); and (4) Perceived Behavioral Control (PBC) has a weak positive impact on Cyber Hygiene Behavior (CHB).
Conclusion
Cyber disinformation is a complex and concerning phenomenon. Successful disinformation campaigns can have a significant negative effect on democratic values and institutions. Defending democracy in the digital age requires a complex approach. The individual behavior of users can influence the spread and effects of the phenomenon.
This chapter argued that users’ behavior plays an essential role in this phenomenon and aimed to identify factors that impact users’ BIs and CHB. The chapter integrated the ETPB and a structural equation model, realized through PLS-SEM and applied to the cyber disinformation phenomenon. The analysis of the self-assessment survey on disinformation risk perception and control revealed that the responses of the two samples are highly similar, with a correlation coefficient of 0.7854. The research demonstrated the applicability of the TPB model and found that MN and PBC have an impact on BI and CHB.
The findings of this chapter provide valuable insights that can be used to improve the overall responses to the phenomenon, such as policies, programs, and clinics, and to elaborate educational materials. To effectively address the phenomenon’s relevant vectors, tactics, and methods, there is a clear need for a complex strategy with multiple components, including research to better understand the phenomenon’s attributes and the behavior of users; frequent risk assessments; increased empowerment of people to detect and report disinformation; improved fact-checking procedures; enhanced international anti-disinformation enforcement and cooperation; technical assistance programs; better-defined responsibility for secondary liability; and awareness-raising and education programs, with a view to improving people’s critical thinking abilities.













