
Risk aversion and social influence

Published online by Cambridge University Press:  15 September 2025

Armina Karapetian
Affiliation:
University of Nevada, Reno, NV, USA
James Sundali*
Affiliation:
University of Nevada, Reno, NV, USA
Federico Guerrero
Affiliation:
University of Nevada, Reno, NV, USA
Alexis Hanna
Affiliation:
University of Nevada, Reno, NV, USA
Garret Ridinger
Affiliation:
University of Nevada, Reno, NV, USA
*Corresponding author: James Sundali; Email: jsundali@unr.edu

Abstract

Amnon Rapoport made seminal contributions to research on investment decision-making and individual decision-making under risk. To build on this work, this paper explores the impact of social influence on risk-taking. First, to build predictions for experimental testing, we modify a standard expected utility model by introducing a social norm variable. Using a standard 10-decision paired lottery choice task, we report the results from three experiments with different manipulations to test whether social influence information affects subjects’ own lottery choices. In Experiment 1, we find that participants are more likely to switch to choosing the risky option earlier if they are told that a large majority (>75%) of a large group (N = 100) of others have also chosen the risky option in the past. In Experiment 2, we find there is no effect if the social influence prompt is framed as a small group (N = 10) or the choice of one (N = 1) successful lottery participant, but there is an effect when participants are provided information about the consistently risky choices of one (N = 1) person in the past. In Experiment 3, using an in-person subject pool, we find some mixed effects on risk-taking when the social information is framed as a small group (N = 10) of peers (other students). Altogether, this paper demonstrates that social influence can impact risk-taking in line with a socially normed expected utility model.

Information

Type
Special Issue Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Economic Science Association.

“‘There is nothing as disturbing to one’s well-being and judgment as to see a friend get rich.’ Unless it is to see a non-friend get rich.” (Aliber & Kindleberger, 2015)

‘Ok listen up r−−−−−−…I know it’s Friday and some of your short-term monkey brains are thinking about bailing on your brothers. Your paper hands are beginning to cramp up. I get it. BUT WE CANNOT SELL! There are still MASSIVE amounts of shorts on $GME. Still, well over 113% of total shares floated (from S3 Partners). Some old shorts have gotten out, but many NEW shorts have taken their place in the past couple of days hoping that $GME will die out.’ – from Wall Street Bets user ‘HoldUnitYouDie’, 17.5k upvotes (Daniel, 2021)

1. Introduction

As the editors of this special issue note, much of Professor Amnon Rapoport’s voluminous work informed the areas of coalition formation, bargaining, social dilemmas, behavioral operations management, queues, and behavioral game theory. Yet some of his most cited work examined critical aspects of how individuals make investment decisions under risk (Benzion et al., 1989, 1,351 citations, #1 most cited Rapoport paper according to Google Scholar; Kroll et al., 1988a, 279 citations, #7; Kroll et al., 1988b, 136 citations, #32). (Footnote 1)

For instance, a sample of Rapoport’s work in financial decision-making examined: (1) how economic and finance students formed discount rates (Benzion et al., 1989); (2) whether highly motivated subjects could make asset allocation decisions consistent with predictions of the Capital Asset Pricing Model (CAPM; Kroll et al., 1988a); (3) whether highly motivated subjects could form portfolios that were mean-variance efficient (Kroll et al., 1988b); (4) whether different forms of expected utility models could account for the behavior of subjects in a multi-stage betting game (MBG) that was similar to saving and investing for retirement (Rapoport et al., 1988); and (5) the impact of favorable versus unfavorable levels of capital and investment conditions on portfolio selection (Rapoport, 1984).

Rapoport’s work focused on testing the effectiveness of rational models in accounting for subject behavior. Taken together, Rapoport’s experiments produced mixed results (at best) in validating the efficacy of rational models in predicting the behavior of subjects in various financial decision-making tasks. In response, Rapoport et al. (1988) wrote:

What implications do these negative findings have for subsequent attempts to formulate the regularities in portfolio selection reported in MBG studies? One could, of course, adhere to the expected utility approach but investigate other utility functions…A second possibility would be to develop and test models which restrict the investor’s beliefs such as the mean-variance model of Markowitz and some of its variations. A third possibility, which would seem consistent with the experimental findings on decision behavior under uncertainty that are well documented in the psychological literature (Hogarth, 1981; Kahneman et al., 1982; Slovic et al., 1977) would be hybrid models which, rather than restricting either tastes or beliefs, restrict them both.

In line with this hybrid approach, we present a paper that examines the impact of social influence on risk aversion. While Amnon likely would have been delighted with a paper on risk aversion, he may have had some reservations about an elusive variable like social influence. Many traditional rational models assume individuals only care about their own payoffs when making decisions, but rational choice theory only requires that preferences are complete and transitive. As a result, the model we introduce is a rational model in the spirit of Amnon Rapoport, albeit a variation that incorporates social influence into risky decisions.

There can be no doubt about the impact of Amnon’s social influence in the domains he published in, and more importantly, on the many graduate students and colleagues that he mentored and loved. Beyond Amnon Rapoport’s personal influence, the broader effects of social influence are seemingly ubiquitous. Modern social media platforms provide the means for legions of influencers to earn money simply by sharing their opinions with followers. Moreover, financial markets recorded the impacts of influencers long before technology allowed for the instantaneous and widespread sharing of one’s thoughts. From the tulip mania of the 1600s, to the real estate and stock market bubbles of the 2000s, financial influencers have helped to create conditions for market bubbles and Ponzi schemes over the course of time (Aliber & Kindleberger, 2015).

Although the impact of social influence on financial markets is generally acknowledged, as evidenced by abundant financial opinions offered daily, systematic research on the mechanisms of social influence on risk-taking and financial decision-making is limited. To fill this gap, we examine whether individual risk-taking in simple lotteries can be influenced by information about the risky lottery choices of others. Standard expected utility theory (EUT), as well as behavioral models such as prospect theory (PT) and rank-dependent utility theory (RDU), predict that individual lottery choices will not depend on others’ choices. In contrast, we show that a recent model of social norms can predict changes in individual lottery choices based on the choices of others (McBride & Ridinger, 2021). To test these different predictions, we experimentally assessed whether subjects’ levels of risk aversion can be altered by manipulating the social influence framing of other people’s prior choices in the Holt and Laury (2002) paired lottery choice task. Specifically, we conducted three experiments – two online experiments using Amazon’s Mechanical Turk (MTurk) and one in-person laboratory experiment. Our primary findings are as follows.

In Experiment 1, we find that participants are more likely to make risky choices earlier and more frequently if they are told that a large majority (>75%) of a large group (N = 100) of others have chosen the risky option in the past (versus the safe option). In Experiment 2, we find there is no effect on subjects’ choices if they are provided with social information about the prior choices of a much smaller group (N = 10) or one successful lottery participant (N = 1), but social information about the consistently risky choices of one prior person (N = 1) did have an effect.

To further test social influence effects in the physical presence of others, Experiment 3 used an in-person subject pool. We find that participants who were provided statements about the prior lottery choices of a group (N = 10) of peers (other students) tended to make riskier choices earlier than participants in the control group who had no social information. However, a group of participants who received more ‘extreme’ social information (i.e., large proportions of their peers had chosen the risky option in the past) did not differ from the control group, suggesting that social information deviating too far from norms may not elicit the same effects.

The rest of the paper is organized as follows. We begin with a literature review and present a basic economic model to show how social influence can affect rational choice. The literature review also highlights the psychological mechanisms through which social influence operates, which motivate our hypotheses. Three experiments are then presented, each testing the impact of social influence on lottery choices with a different social manipulation, followed by data analysis and discussion of the results. Finally, we conclude with a story about the effect of social influence on Amnon Rapoport.

2. Literature review

2.1. Decisions under risk

There exists a large literature studying individual heterogeneity in risk attitudes (Bruhin et al., 2010; Dohmen et al., 2011). Research has examined numerous personal factors that can impact risk preferences, including gender (Charness & Gneezy, 2012; Eckel & Grossman, 2008), income (Holt & Laury, 2002), and age (Dohmen et al., 2011). However, less is known about whether the risk preferences of others can also influence individual risk attitudes and behavior.

According to standard expected utility theory (EUT), information about the choices of others should have no effect on individual risk-taking. Since the underlying probabilities and payoffs are common knowledge, information about the frequency of choices of others should not matter. There are several notable behavioral models of decision-making, with rank-dependent utility (RDU) theory (Quiggin, 1982, 1985) and prospect theory (PT; Tversky & Kahneman, 1992) arguably being the most prominent. Each theory assigns different types of individual weights to the underlying probabilities of risky choices, but neither theory incorporates knowledge of the behavior of others. As a result of that omission, EUT, RDU, and PT all predict that information about others’ lottery choices should have no effect on observed behavior.

Our work relates to the literature studying social reference points in individual risk-taking, which has examined how peer outcomes may influence individuals’ choices. For instance, people do compare themselves to others, and these comparisons can lead to preferences for fair outcomes (Bolton & Ockenfels, 2000; Fehr & Schmidt, 1999). While the vast majority of research has focused on fair outcomes in strategic games (Footnote 2), recent research has begun to examine how social preferences may influence decisions under risk (Bault et al., 2008; Gantner & Kerschbamer, 2018; Linde & Sonnemans, 2012; Lindskog et al., 2022; Schmidt et al., 2021; Schwerter, 2024). Overall, this research has found evidence that the comparison of outcomes may influence risky decisions. For example, Lindskog et al. (2022) introduce a theoretical model of outcome-based comparisons using inequity aversion with risky choice. By making different parameter assumptions, individuals can be motivated by rank (i.e., getting ahead of others) or fairness (i.e., inequity aversion). They find experimental evidence of preferences for getting ahead of others.

Although the social reference point literature has shown that the outcomes of others’ risky choices can influence individual behavior, it remains an open question which types of outcome preferences explain this behavior. Linde and Sonnemans (2012) found that prospect theory with social reference points is inconsistent with their results, whereas the results of Schwerter (2024) are consistent with that model. Importantly, much of this research has focused on outcome-based explanations for the effect of social influence on risky choices. Considerably less attention has been paid to whether individual risky choice is motivated by the choices of others, independent of the outcomes.

A notable exception is Lahno and Serra-Garcia (2015), who conducted an experiment in which individuals first made their own lottery choices and then repeated the lottery choices in pairs. In each pair, one subject was the first mover and the other the second mover. Using the strategy method, the second mover could condition their choice on the choice of the first mover. The results show that a higher proportion of subjects conditioned their choice on the actual choice of the first mover, compared to a treatment condition in which the choice of the first mover was randomly determined. That is, the peer effect increases significantly when the actual peer choice is observed. Additionally, people were more likely to imitate safe choices than risky choices.

While our study adds to this line of research, our approach differs in that we do not pair subjects with another player. Instead, subjects receive empirical information about the prior choices of others. Across conditions, this empirical information differs in how it is framed and in the size of the reference group. Additionally, we do not use the strategy method. One issue with the strategy method when studying social influence is that the structure of the questions may induce an experimental demand effect (Footnote 3). As a result, it is difficult to determine whether differences in individual behavior are due to social influence or to the elicitation itself.

2.2. Holt and Laury lottery choice task

Holt and Laury (2002) developed an experimental procedure to measure risk aversion that has been widely used in experimental studies (Anderson & Mellor, 2008; Lönnqvist et al., 2015) and that we use here. In this task, participants are presented with 10 lottery choices, shown in Table 1, with the following structure for each question:

Option A: (50% chance of winning $2.00; 50% chance of winning $1.60)

Option B: (50% chance of winning $3.85; 50% chance of winning $0.10)

Table 1 The 10 paired lottery choice decisions with low payoffs, Holt and Laury (2002)

Subjects are asked to choose between Option A, a safer choice, and Option B, a riskier choice. The probability of winning the high payoff ($2.00 in Option A, $3.85 in Option B) is systematically increased from question one (a 10% chance) to question ten (a 100% chance). The choices were developed to measure relative risk aversion, and the total number of safe choices serves as a simple summary statistic of it.
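To make this payoff structure concrete, the short sketch below (our illustration, not material from the paper) computes the expected value of each option at every decision; these numbers underlie the discussion of switching points that follows.

```r
# Illustrative arithmetic only: expected values of the two options across the
# 10 decisions, using the low payoffs quoted above and P(high payoff) = q/10.
q    <- 1:10
ev_A <- (q/10) * 2.00 + (1 - q/10) * 1.60   # safe Option A: $2.00 or $1.60
ev_B <- (q/10) * 3.85 + (1 - q/10) * 0.10   # risky Option B: $3.85 or $0.10
round(cbind(decision = q, ev_A, ev_B, difference = ev_A - ev_B), 3)
# The difference is positive for decisions 1-4 and negative from decision 5 on,
# so a risk-neutral decision maker switches to Option B at decision five.
```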

In the first four lottery choices, the expected value of the payoff from Option A is higher than that of Option B, so according to EUT, only risk-seeking individuals would choose Option B. By the fifth lottery choice, the expected value of Option B exceeds that of Option A, so a risk-neutral individual would select Option B in lotteries five through ten. Since individuals have different risk preferences, the task is designed to assess at which point an individual will switch from the safer to the riskier choice. We next provide a theoretical explanation using norm theory to show how participant choices in this task can be affected by social influence.

2.3. Norm theory

One theory that can predict changes in risk attitudes due to the behavior of others is social norms. Social norms can be broadly defined as a set of informal rules that govern the behavior of individuals within groups or communities. In examining the role of norms in decision-making, two types of norms have been identified in prior literature: injunctive norms (what someone should do) and descriptive norms (what most others do) (Bicchieri, 2005; Cialdini et al., 1990).

The effect of norms on behavior is likely context dependent (Kimbrough & Vostroknutov, 2016) and affected by frames (Chang et al., 2019). Context is important because the same social behavior may be considered appropriate in some situations and not appropriate in others. As a result, social norm adherence is likely influenced by the decision frame (Tversky & Kahneman, 1981). Different frames of risky choices mean differing presentations of logically equivalent choices. A meta-analysis by Kühberger (1998) shows that positive framing of risky choices (how many people will be saved) leads to higher risk aversion in choice, while negative framing (how many people will be lost) leads to higher risk-seeking.

The effect of the framing mechanism can be explained by differences in the encoding and processing of information, but it can also be argued that framing works because frames evoke different norms (Chang et al., 2019). Further, people can tailor their behavior based on how a particular situation is categorized, or framed (Bicchieri, 2005). When individuals experience a new situation, they classify it and search for a similar situation or prototype with salient shared characteristics. Once a comparable situation has been identified, the type of behavior considered most ‘normal’ for that situation is activated. According to this model, when faced with a risky choice, individuals may search for a similar choice they have either experienced before or learned about from others. The different contexts of a risky choice may lead individuals to make different choices depending on which norm the individual believes is relevant.

This type of norm theory is consistent with empirical research studying how individual risk preferences can differ within subjects across different domains (Schoemaker, 1990) and elicitation methods (Bauermeister et al., 2018; Dulleck et al., 2015). These variations in risky decisions may make certain types of norms more likely to be activated, leading to differences in risk attitudes. Additionally, according to social norm theory, individuals may have different propensities to follow norms (Bicchieri, 2005; Kimbrough & Vostroknutov, 2016; McBride & Ridinger, 2021). For example, subjects with stronger rule-following preferences tend to put more weight on the norm component of their utility (Kimbrough & Vostroknutov, 2016). Additionally, descriptive and injunctive norms have been found to influence behavior (Bicchieri, 2005; Kimbrough & Vostroknutov, 2016; Chang et al., 2019; McBride & Ridinger, 2021).

To generate predictions about how knowledge of others’ risk choices may influence individual risk choices, we use a model of adherence to social norms introduced by McBride and Ridinger (2021). Building from Bicchieri (2005) and Kimbrough and Vostroknutov (2016), McBride and Ridinger (2021) introduce a model of norm compliance that includes both injunctive and descriptive norms, as well as allowing for individual variation in adherence to norm-following. In the model, an individual $i$ forms a belief $\beta_i \in [0,1]$ about the proportion of the relevant population believed to follow the social norm $\hat{s}$. Individuals receive disutility from deviating from the norm. The utility function of individual $i$ is defined as follows:

\begin{equation*}u_i(s_i) = EU_i(s_i) - k_i(\beta_i)\, g\!\left(\left|s_i - \hat{s}\right|, \beta_i\right),\end{equation*}
\begin{equation*}g\!\left(\left|s_i - \hat{s}\right|, \beta_i\right) = \begin{cases} g\!\left(\left|s_i - \hat{s}\right|\right) & \text{if } \beta_i > \hat{\beta} \\ 0 & \text{if } \beta_i \leq \hat{\beta} \end{cases}\end{equation*}

where $EU_i(s_i)$ is the expected utility individual $i$ receives from action $s_i$, and $k_i(\beta_i) \geq 0$ is the weight $i$ places on the norm, which is a function of the norm compliance of others. In the norm function $g(\cdot)$ it is assumed that $g(0) = 0$ and $g' > 0$. The term $\hat{\beta}$ is assumed to be the lowest proportion at which $i$ would recognize that the injunctive norm is $\hat{s}$. In the model, for each individual $i$ there exists a threshold belief $\beta_i^* \geq 0$ that determines whether the individual will conform to the injunctive norm $\hat{s}$. That is, an individual will adhere to the injunctive norm if and only if $\beta_i \geq \max\{\beta_i^*, \hat{\beta}\}$.
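Read literally, this utility function can be transcribed in a few lines of code. The sketch below is our own illustration: the numeric coding of actions and the linear form of $g$ are expositional assumptions, not part of the authors’ specification.

```r
# Illustrative transcription of the norm-compliance utility defined above.
# Actions are coded numerically (e.g., 0 = safe Option A, 1 = risky Option B),
# and g() is taken to be linear in the deviation, one admissible choice of g.
norm_utility <- function(EU_s,      # expected utility of the chosen action s
                         s, s_hat,  # chosen action and the injunctive norm
                         beta,      # believed proportion of others following the norm
                         k,         # weight placed on the norm, k_i(beta_i) >= 0
                         beta_hat)  # minimum proportion at which the norm is recognized
{
  g <- if (beta > beta_hat) abs(s - s_hat) else 0  # piecewise norm-deviation term
  EU_s - k * g                                     # u_i(s_i) = EU_i(s_i) - k_i(beta_i) g(.)
}
```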

When individuals learn about the risk choices of others, this may lead to a change in their belief, $\beta_i$, about the proportion of others making that choice. This change in belief could alter their own choices through two potential channels. First, the change in belief may alter which action they believe they should choose (i.e., what they think the norm prescribes), $\hat{s}$. Second, it could change their individual sensitivity to following the norm, $k_i(\beta_i)$. Either way, according to this model, increases in the proportion of others making risk-seeking choices can lead an individual to make more risk-seeking choices themselves.

Using a modified linear version of the model from McBride and Ridinger (2021), we derive three cases for different values of $\beta_i$ and $k_i$ that determine whether a person will comply with the social norm (see Appendix 1 for details). These three cases highlight how changes in beliefs about others choosing the risky option (Option B) can potentially change individual behavior in the Holt and Laury (2002) task. The model predicts that switching from Option A to Option B should occur more frequently when the difference in expected utility between Option A and Option B is smaller. This suggests that, for the same level of norm salience $k_i$ and the same belief about others choosing to switch $\beta_i$, there should be more switching in later questions as the expected utility difference between the safe choice (Option A) and the risky choice (Option B) declines. Additionally, individuals with higher norm salience $k_i$ should switch earlier than those with lower norm salience. Finally, higher beliefs about the conformity of others should lead to more switching, even for individuals who, in expected utility terms, would prefer not to switch.
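These comparative statics can be illustrated with a small simulation. The sketch below is ours and does not reproduce the derivation in Appendix 1: it assumes a CRRA expected utility, a linear norm penalty with $k_i(\beta_i) = k\beta_i$, and a perceived norm of ‘choose Option B’, and reports the first decision at which Option B is chosen.

```r
# Illustrative comparative statics only; the functional forms and parameter
# values are our assumptions, not the linear model derived in Appendix 1.
crra <- function(x, r) if (r == 1) log(x) else x^(1 - r) / (1 - r)

first_risky_choice <- function(r, k, beta, beta_hat = 0.5) {
  q    <- 1:10                                                # decision number, P(high) = q/10
  eu_A <- (q/10) * crra(2.00, r) + (1 - q/10) * crra(1.60, r) # safe Option A
  eu_B <- (q/10) * crra(3.85, r) + (1 - q/10) * crra(0.10, r) # risky Option B
  penalty <- if (beta > beta_hat) k * beta else 0             # norm penalty on the safe choice
  which(eu_B > eu_A - penalty)[1]                             # first decision where B is chosen
}

first_risky_choice(r = 0.5, k = 0,   beta = 0.8)  # no weight on the norm: switches later
first_risky_choice(r = 0.5, k = 0.3, beta = 0.8)  # salient risky norm: switches earlier
```

In this toy parameterization, raising $k$ or $\beta_i$ moves the switch point (weakly) to an earlier decision, mirroring the predictions above.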

A key component of the model is the threshold $\hat{\beta}$, which determines whether the injunctive norm enters an individual’s utility function. One interpretation is that $\hat{\beta}$ represents the minimum level of others’ conformity needed for a person to recognize the norm. When this threshold is high, it takes much stronger beliefs about the conformity of others for an individual to potentially conform themselves.

2.4. Framing and social norms

According to the above model of social norms, individuals will follow the norm if they care about norm following, recognize the norm, and hold sufficiently high beliefs about others’ conformity. While the determinants of these individual factors remain an open question, they are likely shaped in part by information about the behavior of others, and one influence on how that information enters the individual utility function may be how it is framed.

Prior work has shown that framing can have a large impact on individual choices (Kahneman & Tversky, 1979; Lieberman et al., 2019). For example, the default effect is the empirical observation that many individuals do not move away from the default choice (Johnson & Goldstein, 2003). One theory is that the default signals to individuals that it is the correct choice (Krijnen et al., 2017); that is, the default choice may be viewed as the injunctive norm. According to this view, how information about others’ behavior is framed may influence what individuals recognize as the norm. In the Holt and Laury (2002) task, individuals can choose either Option A or Option B. If norm recognition is influenced by the default choice, then framing the behavior of others in terms of how many chose Option A may have a different effect on norm recognition than framing it in terms of how many chose Option B. In other words, framing the information as how many people chose Option A may signal that the norm is Option A, so it may take higher levels of conformity by others to lead individuals to switch to Option B. In comparison, framing the choice as how many selected Option B may signal that the norm is Option B, which may lead people to switch to Option B.

3. Hypotheses

We offer three hypotheses regarding the impact of social influence on subjects making Holt and Laury (2002) lottery choices.

Hypothesis 1: Expected utility theory suggests that social information about the choices of others should have no effect on individual lottery choices.

Hypothesis 2: Social norms suggest that subjects will be more likely to choose risky lottery options when others have also done so.

Hypothesis 3: Default choice framing suggests that subjects will switch to the risky choice earlier (later) if social information is framed as to how many people chose the risky (safe) choice.

4. Experiments

4.1. Experiment 1

4.1.1. Study design

This study was an online experiment with participants from Amazon Mechanical Turk (MTurk). (Full questionnaires and participant instructions can be found in Appendix 2.) Participants were randomly assigned to either the control group or one of two treatment groups. Each group received the same 10 Holt and Laury (2002) paired lottery choice questions, with the treatment groups receiving additional ‘social influence’ information with each question.

Specifically, as shown in Table 2, participants in the control group were presented with each lottery question and no additional information. Participants in the first treatment group (i.e., the Safe Framing group) were presented with the number of participants in a prior group of 100 who chose the safe option for each question (Option A). In contrast, participants in the second treatment group (i.e., the Risky Framing group) were presented with the number of participants in a prior group of 100 who chose the risky option for each question (Option B). To avoid deception, the actual number of prior choices for each question was obtained from a pre-test conducted with 312 university undergraduate students.

Table 2 Experiment 1 social influence information by question

Following the lottery choice task, all participants completed questionnaires assessing their personality, social comparison orientation, and demographics. For personality, subjects completed the Mini-IPIP questionnaire (Goldberg, 1992). To measure social comparisons, subjects completed the Iowa–Netherlands Comparison Orientation survey (INCOM; Gibbons & Buunk, 1999). For all analyses and results using the personality data, see Appendix 3.

4.1.2. Recruitment, sample characteristics, and payment

Participants were recruited via MTurk. Participants were at least 18 years old and located in the United States. N = 315 subjects initially participated in this study. Subjects were excluded if they did not complete all 10 core lottery choice questions or failed two attention check questions embedded in the first part of the survey; the final sample consisted of N = 237 observations (38% female), with N = 76 (41% female) in the control group, N = 81 (35% female) in the Safe Framing group, and N = 80 (38% female) in the Risky Framing group. Three percent of participants were aged 18 to 24; 44% were aged 25 to 34; 28% were aged 35 to 44; 17% were aged 45 to 54; and 7.5% were aged 55 or older.

Participants were paid $1.00 for participation, as well as a bonus payment contingent on their choice (A or B) on one randomly selected lottery choice question. The chosen lottery question was determined using a random number generator, and bonus payoff ranged from $1.60 to $3.85.

4.1.3. Data analysis and results

Hypotheses 1–3 were tested using multiple techniques to robustly examine the effects of social influence on risky choices. As an initial visual inspection of the results, Fig. 1 presents the trajectories of safe choices across the lottery choice questions (Q1–Q10) by group (Control group, Safe Framing group, and Risky Framing group). Beginning with question six, there is a clear separation in the proportions of safe choices across treatment groups, with a much higher proportion of safe choices in the Control group than in the Risky Framing group.

Fig. 1 Slopes of safe choices by group in Experiment 1

More formally, to first assess whether individuals respond differently depending on whether or not they receive social information (Hypothesis 1 versus Hypothesis 2), we regressed the total number of safe choices on treatment group (Control group as reference). We controlled for gender by adding dummy-coded sex as a predictor (with men as the reference group). The total number of safe choices was not significantly related to age (p = 0.934), so we did not include age as a control variable. In the linear regression, the Risky Framing group predicted the number of safe choices (p < 0.001). Specifically, compared to the Control group (M = 7.20, SD = 2.25), the Risky Framing group had significantly lower numbers of safe responses (M = 5.85, SD = 2.41), rejecting Hypothesis 1 and supporting Hypothesis 2.
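For concreteness, this between-person test can be sketched as below; the data frame, variable names, and simulated values are placeholders of ours rather than the study data.

```r
# Minimal sketch of the between-person regression (placeholder data, not the study data):
# one row per participant with the total number of safe choices, treatment group, and sex.
set.seed(1)
d1 <- data.frame(
  total_safe = sample(0:10, 237, replace = TRUE),
  group      = factor(sample(c("Control", "SafeFrame", "RiskyFrame"), 237, replace = TRUE)),
  female     = rbinom(237, 1, 0.4)
)
d1$group <- relevel(d1$group, ref = "Control")       # Control group as the reference
summary(lm(total_safe ~ group + female, data = d1))  # treatment dummies plus sex control
```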

Next, we used a multilevel modeling approach (also called random coefficient modeling, or hierarchical linear modeling) to test for differences in the trajectory of safe choices across the 10 lottery questions, using the treatment group as a cross-level moderator. For this analysis, we treated the ‘person’ as a grouping (i.e., Level 2) variable, which allowed us to account for non-independence of responses at the person level. This approach is similar to using OLS regression with clustered standard errors at the individual level, but multilevel modeling is especially appropriate in this case to handle an interaction effect across levels (i.e., using a between-person variable as a moderator for within-person trajectories of responses).

To employ multilevel modeling, we followed Bliese and Ployhart’s (2002) model comparison strategy, which uses a series of increasingly complex models, each one treating item responses as nested within persons to account for shared variance in responses at the person level. The results for all multilevel model comparisons for Experiment 1 are shown in Table 3. First, we examined whether people tended to have different answer choices for Question 1 (coded as Question 0 for analyses), which represents the intercept value for each person. To do so, we compared an OLS regression model to a random intercept model. The model comparison test was significant (p < 0.001), which indicated a random intercept model is appropriate (i.e., people tend to have different intercept values, or different responses to the first question). Next, we tested for different trajectories of responses across people (i.e., slopes) by comparing the random intercept model to a random slope model. The model comparison test was significant (p < 0.001), which indicated a random slope model was appropriate (i.e., people tended to have different trajectories of choosing safe versus risky choices across the course of the 10 questions).
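This sequence of model comparisons can be sketched with the nlme package referenced in the table notes. The long-format data frame below is simulated placeholder data and the variable names are ours; the point is only to show the ordering of the comparisons.

```r
# Sketch of the Bliese and Ployhart (2002) comparison sequence (placeholder data).
library(nlme)
set.seed(2)
n_id <- 237
grp  <- sample(c("Control", "SafeFrame", "RiskyFrame"), n_id, replace = TRUE)
b0   <- rnorm(n_id, 0, 0.8)    # person-specific intercepts
b1   <- rnorm(n_id, 0, 0.10)   # person-specific slopes
dl   <- expand.grid(id = 1:n_id, qnum = 0:9)   # one row per person-question
dl$group <- factor(grp[dl$id], levels = c("Control", "SafeFrame", "RiskyFrame"))
dl$safe  <- rbinom(nrow(dl), 1, plogis(2 + b0[dl$id] - (0.35 + b1[dl$id]) * dl$qnum))  # 1 = Option A
dl$id    <- factor(dl$id)

m_ols <- gls(safe ~ qnum, data = dl, method = "ML")                        # no random effects
m_ri  <- lme(safe ~ qnum, random = ~ 1 | id,    data = dl, method = "ML")  # random intercepts
m_rs  <- lme(safe ~ qnum, random = ~ qnum | id, data = dl, method = "ML")  # random slopes
anova(m_ols, m_ri)   # do people differ on the first question?
anova(m_ri, m_rs)    # do within-person trajectories differ?

m3 <- lme(safe ~ qnum + group, random = ~ qnum | id, data = dl, method = "ML")  # predicts intercepts
m4 <- lme(safe ~ qnum * group, random = ~ qnum | id, data = dl, method = "ML")  # moderates slopes
anova(m3, m4)        # do groups have different slopes across questions?
```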

Table 3 Multilevel modeling results: Effects of social influence on safe response tendencies across questions in Experiment 1

Note:

* p < .1. **p < .05. ***p < .01. SF = Safe Framing group, RF = Risky Framing group, Int = Intercept, RI = Random Intercept, RS = Random Slope. In each model, the outcome was participants’ answer choice in a given question, treating responses as nested within persons. Question Number was coded from 0–9. Model 1 was a null random intercept model (random intercept model of choices across question numbers, without additional predictors of intercept variance), Model 2 added a random slope, Model 3 added predictors of intercept variation (i.e., variation in responses on the first question), and Model 4 added cross-level interaction effects to predict slope variation (i.e., variation in response tendencies across questions). All models were estimated using the lme function in R, which uses a linear mixed effects model with nested random errors that allows within-group errors to be correlated. Standard errors are in parentheses. The lme function calculates standard errors for random effects using the delta method (see Oehlert, 1992).

After establishing that random intercepts and random slopes were appropriate, we added predictors to explain variation in those intercepts and slopes. Specifically, we examined whether social influence information explained variance in answer choices, both on the first question (group predicting intercept variation) and in the overall trajectory of responses across questions (group moderating slope variation to test whether groups had significantly different answer choice slopes). Indeed, in the random intercept model, we found that the Risky Framing group gave significantly different answers to the first question on average compared to the Control group (b = −.106, p = 0.006). Likewise, in the random slope model, the Risky Framing group had a significantly different slope of safe choices than the Control group (Q*Risky b = −.027, p = 0.008). However, the Safe Framing group was not significantly different from the Control group. Thus, across multiple analytic approaches we rejected Hypothesis 1 and found support for Hypothesis 2, but only when social influence information was provided with reference to the ‘risky choice’ (i.e., the number of people who chose Option B for each question) (Footnote 4).

Next, to test Hypothesis 3 about framing effects, we releveled the group variable to treat the Safe Framing group as the reference and re-ran the multilevel models. Results indicated that the Safe Framing group had a significantly different slope than the Risky Framing group (Q*Risky b = −.021, p < .05). Hypothesis 3, which predicted that safe versus risky framing would elicit different risk tendencies in participants, was therefore supported. Altogether, the slope of responses for the Risky Framing group shown in Fig. 1 was significantly different from the slopes of both the Control group (supporting Hypothesis 2) and the Safe Framing group (supporting Hypothesis 3), but the Safe Framing group was not significantly different from the Control group.

Finally, as a robustness check, we also conducted regressions using a random effects logit model and found similar results (see Appendix 4, Table A7). This modeling approach allowed us to test marginal effects comparing the groups to one another by question. These effects are illustrated in Fig. 2. The marginal effects show that the control group was more likely to choose the safe option (Option A) compared to the Safe Frame group for questions 6 to 10. This difference for questions 6 to 10 was weakly significant at the 10% level. Additionally, the Risky Frame group was significantly less likely to choose the safe option (Option A) for questions 5 to 10 compared to the Safe Frame group. This difference was significant at the 10% level for questions 5 to 6 and significant at the 5% level for questions 7 to 10. These results support Hypotheses 2 and 3 for later questions.
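As one way to see how such estimates can be produced, the sketch below (reusing the simulated data frame from the earlier sketch) fits a random effects logit with lme4 and computes by-question differences in average predicted probabilities; this is our own illustration, not the authors’ code, and the package choice and variable names are assumptions.

```r
# Illustrative random effects logit with manual average marginal effects (placeholder data).
library(lme4)
m_logit <- glmer(safe ~ factor(qnum) * group + (1 | id), data = dl, family = binomial)

pred_prob <- function(g) {                 # predicted P(choose Option A) for everyone under group g
  nd <- dl
  nd$group <- factor(g, levels = levels(dl$group))
  predict(m_logit, newdata = nd, type = "response", re.form = NA)  # random effects set to zero
}
# e.g., average marginal effect of Control versus Safe Frame on choosing Option A, by question
round(tapply(pred_prob("Control") - pred_prob("SafeFrame"), dl$qnum, mean), 3)
```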

Note: Marginal effects estimated using random effects logit (see Appendix Table A7)

Fig. 2 Average marginal effects of Control and Risky Frame treatments relative to the Safe Frame treatment in choosing Option A, by question number

4.1.4. Experiment 1 discussion

There are several notable findings from Experiment 1. First, more participants were willing to choose the risky lottery option earlier when they were informed that a large sample of others had also chosen the risky option in the past. In the Risky Framing group, the phrasing of the social comparison statement was: ‘On average out of 100 people who took this survey in the past, X chose Option B.’ This statement had only a small effect on the number of safe choices until the reported proportion reached 55%, at which point there is a noticeable difference in safe choices between the Control group and the Risky Framing group (as seen in Fig. 1).

Second, there is a stronger, statistically significant impact on risky choice tendencies when people are told the percentage of others who made risky choices versus being told the percentage who made safe choices. For example, our results show that people are more likely to choose the risky lottery choice – Option B – when told that 75% of people chose B in the past, compared to being told that 25% of people chose A in the past, even though the information is technically the same. This is consistent with Hypothesis 3, which suggests that the default framing of Option B may signal that Option B is the injunctive norm.

However, although Experiment 1 provided important initial evidence that social influence information can influence risk-taking in decisions, this experiment had some limitations. First, both the Risky Framing group and Safe Framing group received information about the answer tendencies in a group of 100 people. Based on the manipulation, it is unclear whether people’s answer choice tendencies are only influenced by large groups, or whether they may also be influenced by smaller numbers of people or even a single person. Second, there was no information provided about the outcomes, or payoffs, for those who made different responses in the past. In this way, the phrasing of social information gave no indication of whether the relevant social comparison was downward, lateral, or upward. To address these limitations, Experiment 2 included several test groups, including different manipulations of risky answer choice tendencies, variations in the size of the social groups, and varying information about prior payoffs.

4.2. Experiment 2

4.2.1. Study design

The survey used for Experiment 2 was the same as in Experiment 1. After being randomly assigned to conditions, MTurk participants completed the same four sections of a questionnaire, including the 10 Holt and Laury (2002) lottery choice questions, the Mini-IPIP personality questionnaire, the Iowa–Netherlands Comparison Orientation survey (INCOM), and demographic information. The questionnaire was designed using Qualtrics and can be found in the Appendix.

The conditions in Experiment 2 differed from those used in Experiment 1. In this study, participants were randomly assigned to one of five groups, including a control group and four treatment groups. Each group received the same 10 lottery questions, but the treatment groups were provided additional ‘social influence’ information with each question. Specifically, Experiment 1 demonstrated that individuals can be socially influenced if they are told the actual proportions of risky choices from a large sample (100 prior participants). In Experiment 2, we wanted to explore whether individuals can be similarly influenced by the choices of a much smaller group of prior participants, by the choices of individuals who deviate from typical patterns, by the choices of one individual, as well as by the choices of one successful individual.

As a baseline, participants in the Control group were presented each lottery question with no additional information. Participants in the Small Risky Frame group were presented with the number of participants in a prior group of 10 who chose the risky option for each question (Option B). Participants in the Small Contrarian group were presented with the number of participants in a prior group of 10 whose choices deviated from the typical pattern of choosing the risky option (Option B). Participants in the Risky Person group were presented with the choices of one actual prior subject who consistently chose the risky option (Option B) for all lottery questions except the first one. Lastly, participants in the Risky Person with Payoff group were presented with the choices of one actual prior subject who chose the risky option (Option B) for all lottery questions except the first one and who received the highest available payment. The exact wording shown to participants in each group for each question is shown in Table 4.

Table 4 Experiment 2 treatments by question

4.2.2. Recruitment, sample characteristics, and payment

Participants were recruited via Amazon MTurk. Participants were at least 18 years old and located in the United States. N = 500 subjects initially participated in this study. Subjects were excluded if they did not complete all 10 lottery choice questions or failed two attention check questions; the final sample was N = 452 observations (50% female), with N = 93 (52% female) in the Control group, N = 86 (53% female) in the Small Contrarian group, N = 88 (49% female) in the Small Risky Frame group, N = 92 (49% female) in the Risky Person group, and N = 93 (48% female) in the Risky Person with Payoff group. About 0.5% of participants were aged 18 to 24; 24% were aged 25 to 34; 36% were aged 35 to 44; 22% were aged 45 to 54; and 17.5% were aged 55 or older. Participants were paid a minimum guaranteed fee of $1.00 for participation and a bonus fee of up to $3.85 depending on their lottery choice on a randomly chosen question, as in Experiment 1.

4.2.3. Data analysis and results

We used the same data analysis approach as in Experiment 1. First, the total number of safe choices was not significantly related to age (p = 0.104) but was significantly related to sex (p < .001), so we controlled for sex in subsequent analyses. As in Experiment 1, we first regressed the total number of safe choices on the dummy-coded treatment group (Control group as reference) controlling for sex, and the Risky Person group significantly predicted the number of safe choices (p = 0.008). In other words, individuals who received social information about one person who made risky choices throughout the lottery choice task tended to have a lower number of safe choices (M = 5.49, SD = 2.08) on average than individuals in the Control group (M = 6.24, SD = 1.98). This test refuted Hypothesis 1 and supported Hypothesis 2, but only for the Risky Person group.

Next, we again used multilevel modeling with Bliese and Ployhart’s (2002) model-comparison strategy to account for the nested structure of the lottery questions within persons. As shown in Table 5, after establishing that a random intercept model (Model 1) and random slope model (Model 2) were appropriate with this data, we added treatment group as a predictor of intercept variation (Model 3) and as a moderator of slope variation to predict response tendencies across questions (Model 4). We found that respondents in the Risky Person group tended to answer differently on the first lottery question (Question Number 0) in Model 3 predicting intercept variation (b = −.071, p < .01), but none of the cross-level moderators were significant in Model 4.

Table 5 Multilevel modeling results: effects of social influence on safe response tendencies across questions in Experiment 2

Note:

* p < .1. **p < .05. ***p < .01. SRF = Small Risky Framing group, SC = Small Contrarian group, RP = Risky Person group, RPP = Risky Person with Payoff group, Int = Interaction. In each model, the outcome was participants’ answer choice to a given question, treating responses as nested within persons. Question Number was coded 0–9. Model 1 was a null random intercept model (random intercept model of choices across question numbers, without additional predictors of intercept variance), Model 2 added a random slope, Model 3 added predictors of intercept variation (i.e., variation in responses on the first question), and Model 4 added cross-level interaction effects to predict slope variation (i.e., variation in response tendencies across questions). All models were estimated using the lme function in R, which uses a linear mixed effects model with nested random errors that allows within-group errors to be correlated. Standard errors are in parentheses. The lme function calculates standard errors for random effects using the delta method (see Oehlert, 1992).

To interpret these results more clearly, Fig. 3 shows the visual trajectory of safe choices by question for each group, and a more focused plot comparing the control group slope to the Risky Person slope is presented in Figure A2 in Appendix 4. These results show that participants in the Risky Person group tended to start out choosing riskier options right away (i.e., Risky Person group significantly predicted intercept variation as shown in Table 5) and continued to choose riskier choices across the 10 lottery choice questions. In other words, the steepness of the slope did not differ between the Control and Risky Person group, but rather the Risky Person group slope was below the Control group slope across the full 10 questions, demonstrating a clear effect of social influence and supporting Hypothesis 2 for this group.

Fig. 3 Slopes of safe choices by group in Experiment 2

As before, we also re-ran the multilevel models several times by changing the reference group to each treatment group to compare the groups to each other. Similar to the initial results, the Risky Person group also tended to respond significantly differently on the first question compared to the Small Contrarian group (b = .059, p < .05) and the Risky Person with Payoff group (b = .057, p < .05). None of the treatment groups had significantly different slopes. Because the Risky Person group only differed from the other risky framing groups, but none of the risky framing groups differed from the Small Risky Frame group, Hypothesis 3 about framing was not supported.

4.2.4. Experiment 2 discussion

The results of Experiment 2 show no difference in responses (relative to the Control group) when individuals are given information from a small sample or given information about the trajectory of choices of one successful individual in the past. However, a statistically significant difference in responses is found when participants are given information about one person consistently choosing the risky option across questions, without any description of payoff.

Together, Experiments 1 and 2 show that social influence information can sway individuals to respond differently toward risky decision-making in some situations, but not others. Importantly, the significant differences in responses in both Experiments 1 and 2 resulted from groups that were provided with information about the risky choices of others, indicating that learning about others’ risky behavior can reduce risk aversion. Nonetheless, it remains unclear why the risky framing about a large group of 100 (Experiment 1) and a single person (Experiment 2) elicited riskier response behavior, but the risky behavior of a small group of 10 did not change response patterns.

Given these mixed results, we conducted a third experiment designed to further tease apart the conditions under which social influence information matters. Further, both Experiments 1 and 2 were conducted online with MTurk participants. Given this design, the effects of social influence were likely muted because participants were not physically present with other people during the study. Thus, to strengthen the social influence manipulation, Experiment 3 was conducted using an on-campus subject pool in a traditional laboratory environment.

Further, in both Experiments 1 and 2, the social comparison information generally presented ‘realistic’ proportions (except for the Small Contrarian group in Experiment 2) of individuals choosing the risky versus safe option in each question. Because this information reflected the real response tendencies of the majority, it was less likely to change participants’ choices: many participants may have responded like the majority anyway, regardless of the social information. In this way, Experiments 1 and 2 provided relatively conservative tests of the effects of group-referenced social influence on risky choices. To build on these results, Experiment 3 also included a group with more ‘extreme’ social influence information, in which very high proportions of people in a group chose the risky option throughout the lottery choice task.

4.3. Experiment 3

4.3.1. Study design

Experiment 3 followed the same general methodology as Experiments 1 and 2 with the notable exceptions that all participants were undergraduate students, and they completed all tasks in-person at a campus computer lab. As in the previous experiments, participants were randomly assigned to experimental conditions and then completed the same questionnaire, including: (1) the Holt and Laury (2002) lottery choice task; (2) the Mini-IPIP personality questionnaire; (3) the Iowa–Netherlands Comparison Orientation survey (INCOM); and (4) demographic information. The questionnaires were designed using Qualtrics and can be found in Appendix 2.

Experiment 3 was conducted across several sessions. In each session, participants took part in groups of 2 to 20, all present in person at a lab on the campus of a large western university. Participants were randomly assigned to one of four groups: the Control group, the Safe Frame group, the Risky Frame group, or the Risky Frame Extreme group. The randomization was done using Qualtrics randomization functionality. As in the previous experiments, participants in the Control group were given the questionnaire with no additional information; the Safe Frame group received information about the number of peers (i.e., students at the same university) in a prior group who chose the safe option for each question (Option A); the Risky Frame group received information about the number of peers in a prior group who chose the risky option for each question (Option B); and the Risky Frame Extreme group received information about the number of peers in a prior group who chose the risky option for each question (Option B), but with a more ‘extreme’ (i.e., larger) number of peers reported as choosing Option B. Information about the answer choices selected by past participants was based on the results of a pre-test conducted with 312 undergraduate students. The exact wording given to each group for each question is presented in Table 6.

Table 6 Experiment 3 treatments by question

There were two noteworthy changes to the social influence stimuli presented to participants in Experiment 3 compared to the previous experiments. First, depending on the specific condition, participants were shown the number of individuals who chose either Option A or Option B. Specifically, the social comparison statement read: ‘Out of 10 University students who took this survey in the past, # chose Option X,’ where X is replaced with A or B depending on the treatment group condition per Table 6. Unlike in the previous two experiments, the comparison group was directly framed as peers (students from the same university), which represents a clear example of lateral framing. Second, participants in the treatment groups were presented with stick figure graphics to illustrate the proportion of past participants’ choices (see Fig. 4). The graphical representation of this statistical information was designed to further reinforce the proportion of the previous group that chose the risky option for each question, rather than requiring participants to think about the proportions as numbers.

Fig. 4 Example of stick figure graphics in Experiment 3

4.3.2. Recruitment, sample characteristics, and payment

Participants were recruited via class advertisements. Participants were at least 18 years old and located in the United States. N = 284 students participated in this study. Participants were excluded if they did not complete all 10 lottery choice questions or failed two attention check questions. The final sample was N = 270 observations (41% female), with N = 71 in the control group, N = 68 in the Safe Frame group, N = 70 in the Risky Frame group, and N = 61 in the Risky Frame Extreme group. In the total sample, 85% of participants were aged 18 to 24; 7% were aged 25 to 34; and 3.5% were aged 35 or higher.

Participants were initially paid $5.00 as a show-up fee for participation and later a bonus fee of up to $3.85 depending upon lottery choices, as in Experiments 1 and 2. However, in contrast to using a random number generator, participants in Experiment 3 witnessed a ‘live’ drawing in the lab to determine the lottery choice outcomes. After all participants finished the survey task in their lab session, one subject in the room was asked to draw a number from a bag to determine which lottery choice question would be used for the bonus payment. A second subject was then asked to draw another number from a bag to determine the outcome of the lottery. This transparent in-person lottery outcome procedure was explained a priori and was designed to increase the salience of incentive payments (compared to the MTurk subjects who had to rely on a distant online experimenter). Participants received their bonus pay before leaving the laboratory.

4.3.3. Data analysis and results

We used the same data analysis strategy as in Experiments 1 and 2. The total number of safe choices was not related to age (p = .723), but it was significantly related to sex (p < .001), so we controlled for sex in subsequent analyses. First, we regressed the total number of safe choices on treatment group (control group as reference) and sex; none of the treatment groups significantly predicted the total number of safe choices relative to the control group.
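A minimal sketch of this first step, assuming a person-level data frame with one row per participant (hypothetical variable names, not the authors' script):

```r
# Total safe choices (0-10) regressed on treatment indicators (Control as reference) and sex.
person$group <- relevel(factor(person$group), ref = "Control")
ols_fit <- lm(total_safe ~ group + sex, data = person)
summary(ols_fit)
```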

Next, we conducted model comparisons using multilevel modeling (MLM) to test for differences in the trajectory of responses across groups. We first established that a random intercept model (Model 1) and a random slope model (Model 2) were appropriate for these data. We then added treatment group as a predictor of intercept variation (Model 3) and as a cross-level moderator to explain slope variation (Model 4).
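This model-building sequence can be sketched with the lme function noted in Table 7 (variable names are illustrative, assuming a long-format data frame with one row per participant-question):

```r
library(nlme)

# d: id (participant), qnum (question number, coded 0-9), safe (1 = Option A, 0 = Option B),
#    group (treatment condition, Control as reference), sex.
m1 <- lme(safe ~ qnum, random = ~ 1 | id, data = d, method = "ML")     # Model 1: random intercept
m2 <- lme(safe ~ qnum, random = ~ qnum | id, data = d, method = "ML")  # Model 2: adds random slope
m3 <- lme(safe ~ qnum + group + sex,
          random = ~ qnum | id, data = d, method = "ML")               # Model 3: group predicts intercepts
m4 <- lme(safe ~ qnum * group + sex,
          random = ~ qnum | id, data = d, method = "ML")               # Model 4: cross-level interactions

anova(m1, m2, m3, m4)  # nested model comparison
```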

The MLM results are shown in Table 7, and the response slopes by group are shown in Fig. 5. Although none of the treatment groups differed significantly from the control group in their overall number of safe choices in the OLS regression, the trajectory of answer choices differed for both the Safe Frame group (Q*Safe b = .019, p < .05) and the Risky Frame group (Q*Risky b = .022, p < .05) relative to the control group. As shown in Fig. 5, both the Safe Frame and Risky Frame groups chose riskier options earlier than the control group. Intriguingly, both groups' slopes also crossed over the control group's slope, indicating a complex effect of social influence on the trajectory of responses across the 10 questions. Altogether, these results suggest that the Risky Frame Extreme manipulation did not change participants' risk-taking relative to the control group, whereas both the Safe Frame and Risky Frame manipulations did, partially refuting Hypothesis 1 and partially supporting Hypothesis 2.

Note: In the Safe Frame and Risky Frame Extreme conditions, there were zero responses for Option A after question 9.

Fig. 5 Slopes of safe choices by group in Experiment 3

Table 7 Multilevel modeling results: Effects of social influence on safe response tendencies across questions in Experiment 3

Note: *p < .1. **p < .05. ***p < .01. In each model, the outcome was the participant's answer choice on a given question, with responses nested within persons. Question Number was coded 0–9. Model 1 was a null random intercept model (choices across question numbers with no additional predictors of intercept variance), Model 2 added a random slope, Model 3 added predictors of intercept variation (i.e., variation in responses on the first question), and Model 4 added cross-level interaction effects to predict slope variation (i.e., variation in response tendencies across questions). All models were estimated using the lme function in R, which fits a linear mixed-effects model with nested random errors and allows within-group errors to be correlated. Standard errors are in parentheses. The lme function calculates standard errors for random effects using the delta method (see Oehlert, 1992).

For our tests of Hypothesis 3, we again compared the treatment group slopes to each other by changing the reference group in the MLM. These analyses showed that although the Risky Frame Extreme group did not significantly differ from the control group, the Extreme group did significantly differ from the Risky Frame group (QNum*Risky b = .017, p < .05). On the other hand, the Safe Frame group did not significantly differ from either Risky Frame or Risky Frame Extreme. Altogether, although the Risky Frame and Risky Frame Extreme groups had significantly different slopes from each other, both were framed in terms of the risky option; further, the more ‘extreme’ risky proportions did not elicit more risky behavior from the participants. Thus, Hypothesis 3 was not supported.
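These pairwise slope comparisons amount to re-estimating Model 4 with a different reference category, for example (hypothetical variable names as in the sketch above):

```r
# Make Risky Frame the reference group so the interaction terms test slope differences
# of the other conditions against Risky Frame.
d$group <- relevel(factor(d$group), ref = "Risky Frame")
m4_riskyref <- lme(safe ~ qnum * group + sex, random = ~ qnum | id,
                   data = d, method = "ML")
summary(m4_riskyref)
```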

Finally, as a robustness check, we also estimated a random effects logit model and found similar results that further support the conclusions from the multilevel models (see Appendix 4, Table A8). These effects are illustrated in Fig. 6. The marginal effects show that the Control group was significantly more likely than the Risky Frame group to choose Option A for questions 1 to 3, and less likely to choose Option A for question 10. Subjects in the Risky Frame Extreme group were more likely than those in the Risky Frame group to choose Option A in questions 1 to 5. Comparing the Safe Frame and Risky Frame groups, there were no significant differences in marginal effects across the 10 questions. Again, this is not consistent with Hypothesis 3.

Note: Marginal effects estimated using random effects logit (see Appendix Table A8)

Fig. 6 Average marginal effects of treatments relative to the Risky Frame (Option B) treatment in choosing Option A by question number
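The random-effects logit underlying Fig. 6 can be approximated as follows (one common implementation in R using lme4; the authors' exact estimator and the marginal-effects computation in Appendix Table A8 may differ):

```r
library(lme4)

# Random-intercept logit of choosing Option A, with question-by-treatment interactions
# so that treatment contrasts can be evaluated separately at each question.
re_logit <- glmer(safe ~ factor(qnum) * group + sex + (1 | id),
                  data = d, family = binomial)
summary(re_logit)
```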

4.3.4. Experiment 3 discussion

The results from Experiment 3 show that when participants are given social influence information about the number of peers (other students) who completed the lottery task in the past, they tend to choose the riskier option on earlier questions but are somewhat less likely to do so on later questions relative to the control group. Additionally, presenting more 'extreme' proportions of peers choosing the risky option did not produce significant changes in risk preferences. The null results for the Risky Frame Extreme group suggest that the extreme social information may have deviated too far from participants' norms and was therefore discounted or ignored; there may be an upper limit to the degree of social influence that will actually elicit changes in behavior. Alternatively, a fraction of people may exhibit anti-conformist behavior (Eck & Gebauer, 2022) and choose to diverge from the behavior of the majority of the reference group, particularly when that majority is more extreme. Given these competing explanations, the phenomenon warrants further investigation.

Nonetheless, Experiment 3 further strengthened prior conclusions refuting Hypothesis 1 and supporting Hypothesis 2. Specifically, once again in this experiment, social influence information affected participants’ own levels of risk aversion and tended to increase risky behavior, particularly early on when the risk was the greatest.

5. General discussion

Decision-making and risk-taking in the context of social interaction have been topics of research in behavioral finance for decades. To assess factors that may affect risky behavior, we designed a series of experiments to quantify the effects of social influence on decision-making. The theoretical model proposed in this study contributes to the literature on how investors' risk aversion changes in the presence of social interactions. Prior literature has focused on loss aversion and social comparison models within the prospect theory framework, but little attention has been paid to quantifying how social influence shapes risk-taking. The present study begins to fill this gap by examining how the number of influencers, and the percentage of peers who made risky choices in the past, alter individuals' current levels of risk aversion. Specifically, we find support for norms theory (Hypothesis 2) rather than expected utility theory (Hypothesis 1).

Comparing the proportion of safe choices under the different social influence conditions across the three experiments, several findings are noteworthy. First, in all three experiments, framing the social influence prompts in terms of the number of others making 'risky' choices can lead to more risky choices, whether framed as a proportion of a large prior group (Experiment 1), a single consistently risky person (Experiment 2), or a proportion of a smaller peer group (Experiment 3). One straightforward interpretation is that people are more comfortable taking risks when others are also taking such risks, an interpretation consistent with accounts of market bubbles (Kindleberger, 1978). The consistency of significant findings across experiments yields a clear takeaway: social influence from others can affect personal risk-taking.

Another notable takeaway is the framing effects demonstrated in Experiments 1 and 2. In particular, we find that social influence information affects behavior differently depending on whether the description is about people who made ‘safe’ versus ‘risky’ choices in the past. In the present work, social influence information about prior safe choices typically did not elicit differences in behavior relative to the control group (with the exception of social influence information about peers when provided in the physical presence of such peers in Experiment 3). On the other hand, risky framing of social information typically elicited more risky behavior from participants relative to both the control group (Hypothesis 2) and relative to the safe framing group (Hypothesis 3).

The framing effects are notable for understanding how people think about social information and potential norms. It remains an open question how norm recognition occurs and which norm is selected (Bicchieri, 2005). In Experiment 3, for example, some individuals chose the risky option (Option B) right away on Question 1 after learning that 30% of a prior group of peers had chosen Option B. One interpretation is that this information tells the subject what a person ought to do (i.e., what is normative). Alternatively, knowing that some people chose the risky action may change whether that action is seen as a permissible norm: people may not initially consider the risky option on Question 1 a possible norm, but after observing others choosing it, they may come to view the risky choice as one possible norm alongside the safe choice. Prior research on fairness norms has found that, when faced with conflicting norms, people often choose the norm that is in their own best interest (Bediou et al., 2012; Hennig-Schmidt et al., 2018; Ridinger, 2018; Rodriguez-Lara, 2016). In other words, one explanation for the early switching is that learning that some people also switched early makes the risky choice a possible norm, and some people then select that norm when making their decisions.

Another interpretation is that the social information changes the strength of adherence to the norm of choosing the safe option. Suppose that in early questions subjects believe the norm is to choose the safe option. If we assume that some subjects are risk-seeking and also have preferences to follow norms, then we can theoretically account for why individuals would switch after learning that 3 out of 10 previous peers chose the risky option. If a person is relatively risk-seeking but chooses the safe option because they believe it is the norm, then learning that some fraction of people do not follow the norm (i.e., do not choose the safe option) may reduce the utility cost of deviating from that norm themselves.

Additionally, the change in the proportion of safe choices across groups in the middle of the lottery task (around Q4–Q6) is interesting and worthy of further study, particularly given the complexity of the slope crossovers in Experiment 3. Specifically, individuals who received risky-framed social information tended to take more risks on early questions, but those who did not switch early were then less likely to switch on later questions. In the theory presented in the paper, individuals can vary in how they are affected by social information: the model allows individuals to have different values of ki. Empirical research in other domains suggests that many people have ki > 0, meaning their utility is negatively affected by not adhering to the norm (Kimbrough & Vostroknutov, 2016); that is, they exhibit a partial degree of conformity to norms. However, prior work has also found that a fraction of subjects appear to hold anti-conformist preferences (McBride & Ridinger, 2021). In that case, learning that more people are following the norm may actually lead them to choose the opposite of what the majority is doing. The theoretical model allows for this as well: such a person receives an increase in utility from not conforming to the norm.
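One minimal way to write down such a socially normed utility, consistent with the verbal description above (a sketch only, not necessarily the exact specification in the theory section), is:

```latex
% Utility of choice c for individual i: expected utility minus a norm-deviation term.
U_i(c) = \mathrm{EU}_i(c) - k_i \,\mathbf{1}\{c \neq n\}
% n      : the perceived norm (e.g., the safe option on early questions)
% k_i > 0: conformist -- deviating from the norm lowers utility
% k_i = 0: norm-insensitive -- standard expected utility
% k_i < 0: anti-conformist -- deviating from the norm raises utility
```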

Practically, our findings suggest that some aspects of individuals' behavior in risk-taking situations can be understood simply by looking at the behavior of other market participants. Individuals' investment performance may be improved by educating them about the effects of social norms and framing on their decision-making processes (Hypotheses 2 and 3). If it is beneficial to reduce risk aversion, then our results point to one potential intervention: letting people know that others have taken risks in the past, which can lead people to take more risks themselves.

5.1. Limitations and future directions

Our study has certain limitations. First, Experiments 1 and 2 were conducted with Amazon MTurk workers who self-selected into the experiment. Although subjects were randomly assigned to experimental groups, the sample itself was not randomly selected, which could limit the external validity of the results, since participants may not represent the full population of decision-makers in financial or other markets. Additionally, Experiment 1 used MTurk respondents who were not 'master workers,' which substantially reduced the sample size because many respondents failed the attention checks embedded in the survey. Nonetheless, the participants who remained provided reliable data.

Our experiments have shown that individuals' decisions can be influenced by the way information is presented to them. We explored some simple framing manipulations, such as the number of influencers, risky versus safe choices, and unknown referents versus peer influencers. There are, however, many other possible framings of social information, so future research should explore alternative framings in order to isolate the marginal effects of social influence on risk-taking.

Further, although our results are informative for norm theory and for understanding how social influence can affect risk-taking, we did not assess participants' prior expectations or norms. One possibility is that people who are more risk averse hold the safe choice as the norm; if they are then presented with information about people making risky choices, they may become less risk averse themselves. Conversely, individuals who already believe that risk-taking is the norm may be surprised to read that fewer people made the risky choice than they expected, and may become more risk averse as a result. It would be intriguing to directly elicit people's norms and compare their subsequent behavior depending on whether the social information aligns with, or deviates from, their individual expectations.

Finally, all social influence information in our experiments was provided in text-based or visual (stick-figure) format within the surveys. Future research could directly expose participants to situations in which peers, leaders, or other social figures make safe versus risky decisions, to test whether the effects are strengthened by directly witnessing the choices and behavior of others. Likewise, beyond lab experiments, the impact of social influence on risk-taking should be examined in field studies in real-world contexts. Ultimately, we would like to know how our own risk-taking behavior changes when we do 'see a friend get rich.'

6. Conclusion

What would Amnon Rapoport think about the results of our experiments? Amnon generally promoted and utilized rational decision-making. With regard to Holt and Laury's (2002) lottery choices, Amnon would likely have chosen the option with the higher expected value. But would he have been swayed by the social influence of others making safer or riskier choices? A notable trip to Las Vegas may provide some insight.

Sometime back in the mid-1990s, two of Amnon's favoriteFootnote 5 former graduate students (Darryl Seale and Jim Sundali) arranged a weekend trip to Las Vegas. This trip was strictly a short vacation getaway, with no conference to attend and all wives (Maya, Karen, and Nancy) in attendance. After spending a day lounging at the Luxor pool and enjoying a nice dinner, the principals ended up at a Blackjack table in a crowded casino. Amnon observed as Jim handed over some cash for a stack of chips and began placing bets of one or two chips on each hand. Amnon and Jim were both aware of basic strategy in Blackjack, and Amnon might even have known how to count cards. Meanwhile, Nancy, Jim's wife, was playing Blackjack at another table because she didn't want her lucky mojo to be affected by the rationalists.

Jim’s hope was for Amnon to sit down next to him at the Blackjack table so the master and the student could enjoy some real-world gambling to complement the hours of game theory lectures Amnon had delivered in the classroom. Yet Amnon chose to simply stand and observe the gamblers at the table and take in the milieu of a Las Vegas casino on a crowded Saturday evening. Amnon might have been observing and assessing the behavior of the players at the table, like he did with the thousands of subjects in his many experiments. Then, without warning, Amnon placed a $20 bill on the table. Jim asked Amnon if he wanted some chips, but he said no: he was placing a single bet. The cards were dealt, Amnon lost, and his $20 was collected by the dealer. When asked why he did what he did, Amnon replied that a single $20 bet had a higher expected value than a series of $2 bets. Thus, even on Saturday night among hundreds of gamblers on a crowded casino floor in the entertainment capital of the world, Amnon was not swayed by social influence and played rationally.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/eec.2025.10020.

Replication packages

The replication material for the study is available at https://doi.org/10.17605/OSF.IO/FH254.

Footnotes

On behalf of all the authors, the corresponding author states that there are no conflicts of interest.

1 Amnon’s interest in financial decision-making was more theoretical than practical, as he once asked one of his naïve grad students (Jim Sundali) to manage his portfolio. The request was politely declined.

2 See Ridinger and McBride (2020) for a recent survey on social preference models and research in strategic games.

3 For example, McBride and Ridinger (2021) conduct a rule-following experiment with social influence and find that the strategy method is associated with a larger effect than the direct elicitation method. While it is difficult to determine whether this is due to an experimental demand effect, it illustrates that the elicitation method may be important to vary when studying how individuals are influenced by others.

4 We note that a significant portion of subjects chose option A for question 10, which is dominated by option B. Our analysis includes all data for subjects who passed the attention checks. As a robustness check, we re-ran the analysis dropping the subjects who chose option A for question 10. Using this restricted sample, subjects in both the safe frame and risky frame treatments are more likely overall to choose option B than those in the control. There is one difference: with the reduced sample size, the interaction between the risky frame treatment and the question number is no longer significant. These results are available upon request.

5 There may be a bit of a self-serving bias in this characterization.

References

Aliber, R. Z., & Kindleberger, C. P. (2015). Manias, panics, and crashes: A history of financial crises (6th ed., p. 256). Palgrave Macmillan. https://doi.org/10.1007/978-1-137-52574-1
Anderson, L. R., & Mellor, J. M. (2008). Predicting health behaviors with an experimental measure of risk preference. Journal of Health Economics, 27(5), 1260–1274. https://doi.org/10.1016/j.jhealeco.2008.05.011
Bauermeister, G.-F., Hermann, D., & Musshoff, O. (2018). Consistency of determined risk attitudes and probability weightings across different elicitation methods. Theory and Decision, 84(4), 627–644. https://doi.org/10.1007/s11238-017-9616-x
Bault, N., Coricelli, G., & Rustichini, A. (2008). Interdependent utilities: How social ranking affects choice behavior. PLoS ONE, 3(10), e3477. https://doi.org/10.1371/journal.pone.0003477
Bediou, B., Sacharin, V., & Hill, C. (2012). Sharing the fruit of labor: Flexible application of justice principles in an ultimatum game with joint-production. Social Justice Research, 25(1), 25–40. https://doi.org/10.1007/s11211-012-0151-1
Benzion, U., Rapoport, A., & Yagil, J. (1989). Discount rates inferred from decisions: An experimental study. Management Science, 35(3), 270–284. https://doi.org/10.1287/mnsc.35.3.270
Bicchieri, C. (2005). The grammar of society: The nature and dynamics of social norms. Cambridge University Press. https://doi.org/10.1017/CBO9780511616037
Bliese, P. D., & Ployhart, R. E. (2002). Growth modeling using random coefficient models: Model building, testing, and illustrations. Organizational Research Methods, 5(4), 362–387. https://doi.org/10.1177/109442802237116
Bolton, G. E., & Ockenfels, A. (2000). ERC: A theory of equity, reciprocity, and competition. American Economic Review, 90(1), 166–193. https://doi.org/10.1257/aer.90.1.166
Bruhin, A., Fehr-Duda, H., & Epper, T. (2010). Risk and rationality: Uncovering heterogeneity in probability distortion. Econometrica, 78(4), 1375–1412.
Chang, D., Chen, R., & Krupka, E. (2019). Rhetoric matters: A social norms explanation for the anomaly of framing. Games and Economic Behavior, 116, 158–178. https://doi.org/10.1016/j.geb.2019.04.011
Charness, G., & Gneezy, U. (2012). Strong evidence for gender differences in risk taking. Journal of Economic Behavior & Organization, 83(1), 50–58. https://doi.org/10.1016/j.jebo.2011.06.007
Cialdini, R. B., Reno, R. R., & Kallgren, C. A. (1990). A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places. Journal of Personality and Social Psychology, 58(6), 1015. https://doi.org/10.1037/0022-3514.58.6.1015
Daniel, W. (2021, January 31). WallStreetBets traders are pushing risky stocks to all-time highs. Here are 10 quotes from the forum that help explain the phenomenon. Business Insider. https://markets.businessinsider.com/news/stocks/wallstreetbets-traders-equities-all-time-highs-quotes-forum-explain-phenomenon-2021-1-1030022946
Dohmen, T., Falk, A., & Huffman, D. (2011). Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association, 9(3), 522–550. https://doi.org/10.1111/j.1542-4774.2011.01015.x
Dulleck, U., Fooken, J., & Fell, J. (2015). Within-subject intra- and inter-method consistency of two experimental risk attitude elicitation methods. German Economic Review, 16(1), 104–121. https://doi.org/10.1111/geer.12043
Eck, J., & Gebauer, J. E. (2022). A sociocultural norm perspective on Big Five prediction. Journal of Personality and Social Psychology, 122(3), 554. https://doi.org/10.1037/pspp0000387
Eckel, C. C., & Grossman, P. J. (2008). Men, women and risk aversion: Experimental evidence. In C. Plott & V. Smith (Eds.), Handbook of experimental economics results (Vol. 1, pp. 1061–1073). Elsevier. https://doi.org/10.1016/S1574-0722(07)00113-8
Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3), 817–868. https://doi.org/10.1162/003355399556151
Gantner, A., & Kerschbamer, R. (2018). Social interaction effects: The impact of distributional preferences on risky choices. Journal of Risk and Uncertainty, 56(2), 141–164. https://doi.org/10.1007/s11166-018-9275-5
Gibbons, F. X., & Buunk, B. P. (1999). Individual differences in social comparison: Development of a scale of social comparison orientation. Journal of Personality and Social Psychology, 76(1), 129. https://doi.org/10.1037/0022-3514.76.1.129
Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure. Psychological Assessment, 4(1), 26. https://doi.org/10.1037/1040-3590.4.1.26
Hennig-Schmidt, H., Irlenbusch, B., & Rilke, R. M. (2018). Asymmetric outside options in ultimatum bargaining: A systematic analysis. International Journal of Game Theory, 47(1), 301–329. https://doi.org/10.1007/s00182-017-0588-4
Hogarth, R. M. (1981). Beyond discrete biases: Functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin, 90(2), 197. https://doi.org/10.1037/0033-2909.90.2.197
Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644–1655. https://doi.org/10.1257/000282802762024700
Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339. https://doi.org/10.1126/science.1091721
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press. https://doi.org/10.1017/CBO9780511809477
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. https://doi.org/10.2307/1914185
Kimbrough, E. O., & Vostroknutov, A. (2016). Norms make preferences social. Journal of the European Economic Association, 14(3), 608–638. https://doi.org/10.1111/jeea.12152
Kindleberger, C. P. (1978). Manias, panics, and rationality. Eastern Economic Journal, 4(2), 103–112.
Krijnen, J. M., Tannenbaum, D., & Fox, C. R. (2017). Choice architecture 2.0: Behavioral policy as an implicit social interaction. Behavioral Science & Policy, 3(2), 1–18. https://doi.org/10.1177/237946151700300202
Kroll, Y., Levy, H., & Rapoport, A. (1988a). Experimental tests of the separation theorem and the capital asset pricing model. The American Economic Review, 78, 500–519.
Kroll, Y., Levy, H., & Rapoport, A. (1988b). Experimental tests of the mean-variance model for portfolio selection. Organizational Behavior and Human Decision Processes, 42(3), 388–410. https://doi.org/10.1016/0749-5978(88)90007-6
Kühberger, A. (1998). The influence of framing on risky decisions: A meta-analysis. Organizational Behavior and Human Decision Processes, 75(1), 23–55. https://doi.org/10.1006/obhd.1998.2781
Lahno, A. M., & Serra-Garcia, M. (2015). Peer effects in risk taking: Envy or conformity? Journal of Risk and Uncertainty, 50(1), 73–95. https://doi.org/10.1007/s11166-015-9209-4
Lieberman, A., Duke, K. E., & Amir, O. (2019). How incentive framing can harness the power of social norms. Organizational Behavior and Human Decision Processes, 151, 118–131. https://doi.org/10.1016/j.obhdp.2018.12.001
Linde, J., & Sonnemans, J. (2012). Social comparison and risky choices. Journal of Risk and Uncertainty, 44(1), 45–72. https://doi.org/10.1007/s11166-011-9135-z
Lindskog, A., Martinsson, P., & Medhin, H. (2022). Risk-taking and others: Does the social reference point matter? Journal of Risk and Uncertainty, 64(3), 287–307. https://doi.org/10.1007/s11166-022-09376-x
Lönnqvist, J. E., Verkasalo, M., & Walkowitz, G. (2015). Measuring individual risk attitudes in the lab: Task or ask? An empirical comparison. Journal of Economic Behavior & Organization, 119, 254–266. https://doi.org/10.1016/j.jebo.2015.08.003
McBride, M., & Ridinger, G. (2021). Beliefs also make social-norm preferences social. Journal of Economic Behavior & Organization, 191, 765–784. https://doi.org/10.1016/j.jebo.2021.09.030
Oehlert, G. W. (1992). A note on the delta method. The American Statistician, 46(1), 27–29. https://doi.org/10.1080/00031305.1992.10475842
Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior & Organization, 3(4), 323–343. https://doi.org/10.1016/0167-2681(82)90008-7
Quiggin, J. (1985). Subjective utility, anticipated utility, and the Allais paradox. Organizational Behavior and Human Decision Processes, 35(1), 94–101. https://doi.org/10.1016/0749-5978(85)90046-9
Rapoport, A. (1984). Effects of wealth on portfolios under various investment conditions. Acta Psychologica, 55(1), 31–51. https://doi.org/10.1016/0001-6918(84)90058-1
Rapoport, A., Zwick, R., & Funk, S. G. (1988). Selection of portfolios with risky and riskless assets: Experimental tests of two expected utility models. Journal of Economic Psychology, 9(2), 169–194. https://doi.org/10.1016/0167-4870(88)90050-5
Ridinger, G., & McBride, M. (2020). Reciprocity in games with unknown types. In Handbook of experimental game theory (p. 271).
Ridinger, G. (2018). Ownership, punishment, and norms in a real-effort bargaining experiment. Journal of Economic Behavior & Organization, 155, 382–402. https://doi.org/10.1016/j.jebo.2018.09.008
Rodriguez-Lara, I. (2016). Equity and bargaining power in ultimatum games. Journal of Economic Behavior & Organization, 130, 144–165. https://doi.org/10.1016/j.jebo.2016.07.007
Schmidt, U., Friedl, A., & Eichenseer, M. (2021). Social comparison and gender differences in financial risk taking. Journal of Economic Behavior & Organization, 192, 58–72. https://doi.org/10.1016/j.jebo.2021.09.014
Schoemaker, P. J. (1990). Are risk-attitudes related across domains and response modes? Management Science, 36(12), 1451–1463. https://doi.org/10.1287/mnsc.36.12.1451
Schwerter, F. (2024). Social reference points and risk taking. Management Science, 70(1), 616–632. https://doi.org/10.1287/mnsc.2023.4698
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1977). Behavioral decision theory. Annual Review of Psychology, 28. https://doi.org/10.1146/annurev.ps.28.020177.000245
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. https://doi.org/10.1126/science.7455683
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297–323. https://doi.org/10.1007/BF00122574