Wrongdoers may try to collaborate to achieve greater gains than would be possible alone. Yet potential collaborators face two issues: they need to identify other cheaters accurately and to trust that their collaborators will not betray them when the opportunity arises. These concerns may be in tension, since genuine cheaters could also be the likeliest to be untrustworthy. We formalise this interaction as the ‘villain’s dilemma’ and use it in a laboratory experiment to study three questions: What kind of information helps people overcome the villain’s dilemma? Does the villain’s dilemma promote or hamper cheating relative to individual settings? Who participates in the villain’s dilemma, and who is a trustworthy collaborative cheater? We find that information has important consequences for behaviour in the villain’s dilemma. Public information about actions is important for supporting collaborative dishonesty, while more limited sources of information lead to back-stabbing and poor collaboration. We also find that the level of information, the role of the decision maker, and the round of the experiment affect whether dishonesty is higher or lower in the villain’s dilemma than in our individual honesty settings. Finally, individual factors are generally unrelated to collaborating, but individual dishonesty predicts untrustworthiness as a collaborator.
Trust is essential for effective collaboration. In advice settings, decision-makers’ trust in their advisors determines their willingness to follow advice. We propose that trust in the opposite direction, that is, the trust of the advisor in the decision-maker, can affect the use of advice. Specifically, we suggest that advice-taking is greater after a show of trust by the advisor than after an instance of distrust. We conducted four behavioral experiments using the trust game and judge–advisor system paradigms and one scenario study using a sample of currently employed professionals (N = 1599). We find that initial displays of trust by advisors result in greater acceptance of their advice (Studies 1A-B). This effect persists across different levels of advice quality, resulting in smaller underutilization of high-quality advice but also in overreliance on low-quality advice (Study 2). Decision-makers not only show greater willingness to follow advisors who trust them but also respond similarly to advisors who display trust in other people (Study 3). Finally, we find evidence for both perceived advisor competence and decision-makers’ motivation to reciprocate as mediators of the relation between advisors’ level of trust and decision-makers’ willingness to follow their advice (Study 4). Our findings shed light on the dynamics of trust and persuasion in advice relationships and provide insight for advisors who wish to maintain the effectiveness of their input.
Simulating a real-world environment is of utmost importance for achieving accurate and meaningful results in experimental economics. Offering monetary incentives is a common method of creating this environment. In general, experimenters provide the rewards at the time of the experiment. In this paper, we argue that receiving the reward at the time of the experiment may lead participants to make decisions as if the money they are using were not their own. To solve this problem, we devised a “prepaid mechanism” that encourages participants to use the money as if it were their own.
Two rationales have emerged for why individuals keep their promises: (a) an emotional commitment to keep actions and words consistent, a commitment rationale and (b) avoidance of guilt due to not meeting the expectations of the promisee, an expectations rationale. We propose a new dichotomy with clearer distinctions between rationales: (1) an internal consistency rationale, which is the desire to keep actions and words consistent regardless of others’ awareness of the promise and (2) a communication rationale, which captures all aspects of promise keeping that are associated with the promisee having learned of the promise, including but not limited to promisee expectations. Using an experiment that manipulates whether promises are delivered, we find no support for the internal consistency rationale; only delivered promises are relevant. In a second experiment designed to better understand what aspect of promise delivery influences promisor behavior, we manipulate whether the promise is delivered before or after the promisee is able to take a trusting action. We find late-arriving promises are relevant though not as relevant as promises delivered before the promisee chooses whether to take the trusting action. We conclude that implicit contracting does not fully explain promise keeping, because had it done so, late-arriving promises would also be irrelevant.
In this paper we use experimental data from rural Cameroon to quantify the effect of social distance on trust and altruism. Our measure of social distance is relevant to everyday economic interactions: subjects in a Trust Game play with fellow villagers or with someone from a different village. We find that significantly more money is sent when the players are from the same village. Other factors that influence transfers at least as much as the same-village effect are gender, education and membership of rotating credit groups. To test whether Senders are motivated by altruism, they also play a Triple Dictator Game. Senders transfer significantly more money on average in the Trust Game than in the Triple Dictator Game. However, there is also a social distance effect in the Triple Dictator Game. Results from a Risk Game suggest that Trust Game transfers are uncorrelated with attitudes to risk.
We experimentally investigate to what extent people trust and honor trust when they are playing with other people’s money (OPM). We adopt the well-known trust game by Berg et al. (in Games Econ. Behav. 10:122–142, 1995), with the difference that the trustor (sender) who sends money to the trustee (receiver) does this on behalf of a third party. We find that senders who make decisions on behalf of others do not behave significantly differently from senders in our baseline trust game who manage their own money. But receivers return significantly less money when senders send a third party’s money. As a result, trust is only profitable in the baseline trust game, but not in the OPM treatment. The treatment effect among the receivers is gender specific. Women return significantly less money in OPM than in baseline, while there is no such treatment effect among men. Moreover, women return significantly less than men in the OPM treatment.
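The payoff structure of the Berg et al. trust game referenced above can be sketched in a few lines. This is a minimal illustration assuming the common parameterization of a 10-unit endowment for each player and a tripled transfer; the stakes and multiplier in any particular study may differ.

```python
def trust_game_payoffs(sent, returned, endowment=10, multiplier=3):
    """Payoffs in a Berg-style trust game (illustrative parameters).

    The sender gives up `sent` (0..endowment); the transfer is
    multiplied before reaching the receiver, who then returns
    `returned` out of the multiplied amount.
    """
    assert 0 <= sent <= endowment
    assert 0 <= returned <= multiplier * sent
    sender_payoff = endowment - sent + returned
    receiver_payoff = endowment + multiplier * sent - returned
    return sender_payoff, receiver_payoff

# Full trust, half of the tripled amount returned:
trust_game_payoffs(10, 15)  # -> (15, 25)
```

The sketch makes the dilemma concrete: sending everything and having half of the tripled amount returned leaves both players better off than no trust at all (payoffs of 15 and 25 rather than 10 each), but the receiver's selfish best response is to return nothing.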
We investigate the role of intentions in two-player two-stage games. For this purpose we systematically vary the set of opportunity sets the first mover can choose from and study how the second mover reacts not only to opportunities of gains but also of losses created by the choice of the first mover. We find that the possibility of gains for the second mover (generosity) and the risk of losses for the first mover (vulnerability) are important drivers for second mover behavior. On the other hand, efficiency concerns and an aversion to violating trust seem to be far less important motivations. We also find that second movers compare the actual choice of the first mover, and the alternative choices that would have been available to him, to allocations that involve equal material payoffs.
We use time, rather than money, as the salient component of subjects’ incentives in three workhorse experimental paradigms. The use of waiting time can be interpreted as a special type of real effort condition, in which it is particularly straightforward to achieve experimental control over incentives. The three experiments, commonly employed to study social preferences, are the dictator game, the ultimatum game and the trust game. All subjects in a session earn the same participation fee, but their choices affect the time at which they are permitted to leave the laboratory. Decisions that are associated with greater own payoff translate into the right to depart earlier. The modal proposal in both the dictator and ultimatum games is an equal split of the waiting time. In the trust game, there is substantial trust and reciprocity. Overall, social preferences are evident in time allocation decisions. We compare subjects’ decisions over time and money and find no significant differences in average decisions. The pattern of results suggests that results obtained in the laboratory with money as the medium of reward generalize to other reward media.
We analyze reciprocal behavior when moral wiggle room exists. Dana et al. (Econ Theory 33(1):67–80, 2007) show that giving in a dictator game is inconsistent with distributional preferences as the giving rate drops when situational excuses for selfish behavior are provided. Our binary trust game closely follows their design. Only a preceding stage (safe outside option vs. enter the game) is added in order to introduce reciprocity. We find significantly lower rates of selfish choices in the trust baseline in comparison to our treatments that feature moral wiggle room manipulations and a dictator baseline. It seems that reciprocal behavior is not only due to people liking to reciprocate but also because they feel obliged to do so.
We report the results of experiments conducted over the internet between two different laboratories. Each subject at one site is matched with a subject at another site in a trust game experiment. We investigate whether subjects believe they are really matched with another person, and suggest a methodology for ensuring that subjects’ beliefs are accurate. Results show that skepticism can lead to misleading results. If subjects do not believe they are matched with a real person, they trust too much: i.e., they trust the experimenter rather than their partner.
We analyze whether subjects with extensive laboratory experience and first-time participants, who voluntarily registered for the experiment, differ in their behavior. Subjects play four one-shot, two-player games: a trust game, a beauty contest, an ultimatum game, a traveler’s dilemma and, in addition, we conduct a single-player lying task and elicit risk preferences. We find few significant differences. In the trust game, experienced subjects are less trustworthy and they also trust less. Furthermore, experienced subjects submit fewer non-monotonic strategies in the risk elicitation task. We find no differences whatsoever in the other decisions. Nevertheless, the minor differences observed between experienced and inexperienced subjects may be relevant because we document a potential recruitment bias: the share of inexperienced subjects may be lower in the early recruitment waves.
Betrayal aversion has been operationalized as the evidence that subjects demand a higher risk premium to take social risks compared to natural risks. This evidence was first shown by Bohnet and Zeckhauser (J Econ Behav Organ 55:467–484, 2004) using an adaptation of the Becker–DeGroot–Marschak mechanism (BDM, Becker et al. Behav Sci 9:226–232, 1964). We compare their implementation of the BDM mechanism with a new version designed to facilitate subjects’ comprehension. We find that, although the two versions produce different distributions of values, the size of betrayal aversion, measured as an average treatment difference between social and natural risk settings, is not different across the two versions. We further show that our implementation is preferable in practice, as it substantially reduces subjects’ mistakes and the likelihood of noisy valuations.
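The elicitation logic behind this kind of design can be sketched roughly as follows. This is a hypothetical simplification, not the authors' exact procedure: the subject states a minimum acceptable probability (MAP) of a good outcome; if the true probability meets the MAP, the risky option is played out, otherwise the subject keeps a sure payoff. Betrayal aversion then appears as a higher MAP under social risk than under an equivalent natural risk.

```python
import random

def bdm_outcome(stated_map, true_p, sure_payoff, high, low, rng=random):
    """Simplified BDM-style elicitation (hypothetical sketch).

    If the true probability of the good outcome meets the subject's
    stated minimum acceptable probability (MAP), the risky lottery is
    played; otherwise the subject keeps the sure payoff.
    """
    if true_p >= stated_map:
        return high if rng.random() < true_p else low
    return sure_payoff

def betrayal_premium(map_social, map_natural):
    """Betrayal aversion as the extra premium demanded for social
    over natural risk: a positive value means the subject requires a
    better chance before trusting a person than before accepting an
    identical mechanical lottery."""
    return map_social - map_natural
```

Stating one's true MAP is incentive-compatible under this rule: overstating forgoes favorable lotteries, while understating accepts lotteries worse than the sure payoff.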
Extensive empirical evidence demonstrates that humans possess other-regarding preferences, that is, they care about the well-being of others, in addition to their own. Humans are predisposed to cooperate with each other, but they are conditional cooperators; they respond to kindness with kindness and unkindness with unkindness. The intentionality of actions by others is important in judging the unkindness of their actions. This chapter explores the evidence on human sociality, using several experimental games such as: the ultimatum and dictator games, the trust and the gift exchange games, and the public goods game with and without punishments. The external validity of lab experiments is also considered. Other topics include the evolutionary origins of preferences and the behavioral differences between WEIRD and non-WEIRD societies. We consider the role of human morality and the aversion to lying and breaking promises. Even when lies cannot be discovered, a significant fraction of people chooses to remain honest, while others tell partial lies and only a few lie maximally. We also outline social identity theory, whereby humans treat ingroup members more favorably relative to outgroup members.
Do unbiased third-party peacekeepers build trust between groups in the aftermath of conflict? Theoretically, we point out that unbiased peacekeepers are the most effective at promoting trust. To isolate the causal effect of bias on trust, we use an iterated trust game in a laboratory setting. Groups that previously engaged in conflict are put into a setting in which they choose to trust or reciprocate any trust. Our findings suggest that biased monitors impede trust while unbiased monitors promote cooperative exchanges over time. The findings contribute to the peacekeeping literature by highlighting impartiality as an important condition under which peacekeepers build trust post-conflict.
Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.
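The Randomized Response Technique mentioned above can be illustrated with a short simulation. This sketches the common forced-response variant; the randomization device and probabilities used in the study itself are assumptions here, not taken from the abstract.

```python
import random

def rrt_response(true_answer, p_truth=0.75, rng=random):
    """Forced-response Randomized Response Technique (one common
    variant): with probability p_truth the respondent answers
    truthfully; otherwise a fair coin dictates the answer, so no
    single response reveals the respondent's true choice."""
    if rng.random() < p_truth:
        return true_answer
    return rng.random() < 0.5  # forced random answer

def estimate_prevalence(responses, p_truth=0.75):
    """Recover the population 'yes' rate from noisy RRT responses,
    inverting: observed_yes = p_truth * pi + (1 - p_truth) * 0.5."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

This is what makes the technique useful for reducing socially desirable responding: each individual answer is deniable because it may have been forced by the coin, yet the aggregate rate of the sensitive behavior remains statistically recoverable.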
Ethnographers have recorded many instances of tokens donated as gifts to attract new partners or strengthen ties to existing ones. We study whether gifts are an effective pledge of the donor’s trustworthiness through an experiment modeled on the trust game. We vary whether the trustee can send a token before the trustor decides whether to transfer money; whether one of the tokens is rendered salient through experimental manipulations (a vote or an incentive-compatible rule of purchase for the tokens); and whether the subjects interact repeatedly or are randomly re-matched in each round. Tokens are frequently sent in all studies in which tokens are available, but repeated interaction, rather than gifts, is the leading behavioral driver in our data. In the studies with random pairs, trustors send significantly more points when the trustee has sent a token. Subjects in a fixed matching achieve comparable levels of trust and trustworthiness in the studies with and without tokens. The trustee’s decision to send a token is not predictive of the amount the trustee returns to the trustor. A token is used more sparingly whenever salient, a novel instance of endogenous value creation in the lab.
This chapter explores the role of trust in facilitating economic transactions. We cover seminal and more recent research suggesting how the game-theoretic approach in economics relies on trust to explain market transactions between two parties – individuals and firms. We also study how the experimental results of the trust game (and assorted variations) introduced by economists help improve our understanding of how trust affects the underlying dynamic in dyadic transactions. Relationships between trust levels in society and macroeconomic growth are also explored.
Psychotic disorders are characterized by problems in interpersonal and social functioning. Paranoid ideation reflects severe suspiciousness and distrust in others. However, the neural mechanisms underlying these social symptoms are largely unknown. Here, we discuss studies investigating trust in psychosis by means of the interactive trust game, and through trustworthiness ratings of faces. Across all stages of the continuum, reduced baseline trust was found in various studies, possibly suggesting a trait-like vulnerability for psychosis. In repeated interactions, chronic patients engage less in trust-honouring interactions, although they show intact reactions to facial expressions. Overall, first-episode patients and individuals at high risk for psychosis also show reduced trust, but are able to learn to trust over repeated interactions. Several factors that can influence trust are discussed. At the neural level, differential activation in brain regions associated with theory of mind and reward processing was found in individuals with psychosis across illness stages. Theoretical accounts considering motivation, cognition and affect are discussed and suggestions for future research are formulated.
Understanding when to trust and establishing judgments of trustworthiness are complex processes that are critical and essential for human life. Appropriate judgments of trustworthiness lead to the formation of cooperative, mutually beneficial relationships that facilitate personal success, a sense of achievement, increased well-being, and quality of life. The trust game is an economic decision-making game that was specifically designed to measure trust. It is an important and unique instrument, as it measures the entirety of the trust process. Research investigating brain activation during participation in the trust game has shown many brain regions and networks involved in the processes of trust. Whether some of these regions are necessary for various trust processes has been determined by studying trust game performances in individuals with lesions in specific trust-related brain areas. This chapter reviews lesion studies in patients with damage to the insula, amygdala, and prefrontal cortex, with a focus on how such patients perform on various aspects of the trust game and how the findings have informed our understanding of the neuroanatomical correlates of trust. Additionally, we briefly review some functional neuroimaging research on the involvement of the temporal parietal junction and ventral striatum in the trust process.
Psychopathological descriptions, diagnostic criteria, and experimental studies suggest issues with trust across the range of different personality disorders. While the majority of findings refer to Borderline Personality Disorder, the studies investigating trust issues in other personality disorders suggest differential patterns of trust impairments, but also common determinants. For example, traumatization during childhood and adolescence seems to be important for alterations in trust across the spectrum of personality disorders. In this chapter, we describe the definition and classification of different personality disorders, report findings elucidating the specific importance of issues with trust in this group of mental disorders, and present therapeutic approaches aiming to restore trust. Most of the empirical studies focus on self-reports and behavioral indices of trust. In contrast, neurobiological studies investigating the neuronal correlates of trust impairments are extremely sparse. One exception is the body of studies on the effects of the prosocial neuropeptide oxytocin, which emphasize that the mechanisms underlying alterations of trust in personality disorders are complex. At the end of the chapter, we discuss implications for future research on trust that may contribute to our understanding of impairments in trust in personality disorders and thereby help to improve the treatment options for this domain of interpersonal dysfunction.