This textbook introduces the fundamentals of MATLAB for behavioral sciences in a concise and accessible way. Written for those with or without computer programming experience, it works progressively from fundamentals to applied topics, culminating in in-depth projects. Part I covers programming basics, ensuring a firm foundation of knowledge moving forward. Difficult topics, such as data structures and program flow, are then explained with examples from the behavioral sciences. Part II introduces projects for students to apply their learning directly to real-world problems in computational modelling, data analysis, and experiment design, with an exploration of Psychtoolbox. Accompanied by online code and datasets, extension materials, and additional projects, with test banks, lecture slides, and a manual for instructors, this textbook represents a complete toolbox for both students and instructors.
Existing approaches to conducting inference about the Local Average Treatment Effect (LATE) require assumptions that are considered tenuous in many applied settings. In particular, Instrumental Variable techniques require monotonicity and the exclusion restriction, while principal score methods rest on some form of the principal ignorability assumption. This paper provides new results showing that an estimator within the class of principal score methods allows conservative inference about the LATE without invoking such assumptions. I term this estimator the Compliance Probability Weighting estimator and show that, under very mild assumptions, it provides an asymptotically conservative estimator for the LATE. I apply this estimator to a recent survey experiment and provide evidence of a stronger effect for the subset of compliers than the original authors had uncovered.
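As a rough illustration of the weighting idea, the sketch below implements a generic principal-score weighting estimator for the LATE; it assumes one-sided noncompliance so that the compliance score can be fit on the assigned-to-treatment arm, and all function and variable names are invented. The paper's Compliance Probability Weighting estimator may differ in its details.

```python
# Hedged sketch of a principal-score-style weighting estimator for the LATE.
# Assumes one-sided noncompliance (controls cannot take the treatment), so
# P(complier | X) can be fit on the assigned-to-treatment arm. The paper's
# Compliance Probability Weighting estimator may differ in detail.
import numpy as np
from sklearn.linear_model import LogisticRegression

def principal_score_late(y, z, d, x):
    """y: outcomes, z: random assignment, d: treatment taken, x: covariates."""
    score = LogisticRegression().fit(x[z == 1], d[z == 1])  # compliance score model
    p = score.predict_proba(x)[:, 1]                        # estimated P(complier | X)
    w = p / p.mean()                                        # normalized weights
    # Weighted difference in mean outcomes between assignment arms.
    return (np.average(y[z == 1], weights=w[z == 1])
            - np.average(y[z == 0], weights=w[z == 0]))
```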
In experimental social science, precise treatment effect estimation is of utmost importance, and researchers can make design choices to increase precision. Specifically, block-randomized and pre-post designs are promoted as effective means to increase precision. However, implementing these designs requires pre-treatment covariates, and collecting this information may decrease sample sizes, which in and of itself harms precision. Therefore, despite the literature’s recommendation to use block-randomized and pre-post designs, it remains unclear when to expect these designs to increase precision in applied settings. We use real-world data to demonstrate a counterintuitive result: precision gains from block-randomized or pre-post designs can withstand significant sample loss that may arise during implementation. Our findings underscore the importance of incorporating researchers’ practical concerns into existing experimental design advice.
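To make the trade-off concrete, here is a minimal simulation, with invented parameter values, comparing a pre-post (covariate-adjusted) design on a sample shrunk by 30% against an unadjusted design on the full sample; with a strong pre-post correlation, the smaller adjusted design can still be the more precise one.

```python
# Minimal simulation of the precision trade-off described above: covariate
# adjustment on a reduced sample vs. no adjustment on the full sample.
# All parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_full, loss, rho, sims = 1000, 0.3, 0.8, 2000   # rho: pre-post correlation

def trial(n, adjust):
    pre = rng.normal(size=n)                     # pre-treatment measure
    z = rng.integers(0, 2, size=n)               # random assignment
    post = 0.2 * z + rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=n)
    y = post - rho * pre if adjust else post     # stand-in for regression adjustment
    return y[z == 1].mean() - y[z == 0].mean()

naive = [trial(n_full, False) for _ in range(sims)]
prepost = [trial(int(n_full * (1 - loss)), True) for _ in range(sims)]
print(f"SE, full sample, unadjusted : {np.std(naive):.4f}")
print(f"SE, 30% smaller, pre-post   : {np.std(prepost):.4f}")
```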
When it comes to experiments with multiple-round decisions under risk, the current payoff mechanisms are incentive compatible with either outcome weighting theories or probability weighting theories, but not both. In this paper, I introduce a new payoff mechanism, the Accumulative Best Choice (“ABC”) mechanism, which is incentive compatible for all rational risk preferences. I also identify three necessary and sufficient conditions for a payoff mechanism to be incentive compatible for all models of decision under risk with complete and transitive preferences. I show that ABC is the unique incentive compatible mechanism for rational risk preferences in a multiple-task setting. In addition, I test the empirical validity of the ABC mechanism in the lab. The results from both a choice pattern experiment and a preference (structural) estimation experiment show that individual choices under the ABC mechanism are not statistically different from those observed with the one-round task experimental design. The ABC mechanism supports unbiased elicitation of both outcome and probability transformations as well as testing of alternative decision models that do or do not include the independence axiom.
Conventional value-elicitation experiments often find subjects provide higher valuations for items they possess than for identical items they may acquire. Plott and Zeiler (Am Econ Rev 95:530–545, 2005) replicate this willingness-to-pay/willingness-to-accept “gap” with conventional experimental procedures, but find no gap after implementing procedures that provide for subject anonymity and familiarity with the second-price mechanism. This paper investigates whether anonymity is necessary for their result. We employ both types of procedures with and without anonymity. Contrary to the predictions of one theory, which suggests social pressures may cause differences in subject valuations, we find that, regardless of anonymity, conventional procedures generate gaps and Plott and Zeiler’s does not. These findings strongly suggest subject familiarity with elicitation mechanisms, not anonymity, is responsible for the variability in results across value-elicitation experiments. As an application to experimental design methodology, there appears to be little need to impose anonymity when using second-price mechanisms in standard consumer good experiments.
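For readers unfamiliar with the mechanism at issue, here is a minimal sketch of second-price auction logic: the winner pays the second-highest bid, which makes truthful bidding weakly dominant and is exactly why subject familiarity with the mechanism matters. The example bids are invented.

```python
# Sketch of the second-price (Vickrey) rule behind the elicitation:
# the highest bidder wins but pays the second-highest bid.
def second_price_outcome(bids):
    """Return (winner index, price paid) in a sealed-bid second-price auction."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]   # winner pays the second-highest bid

print(second_price_outcome([4.0, 7.5, 6.0]))   # -> (1, 6.0)
```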
Eliciting the level of risk aversion of experimental subjects is of crucial concern to experimenters. The literature offers a variety of methods for such elicitation; the concern of the experiment reported in this paper is to compare them. The methods we investigate are the following: Holt–Laury price lists; pairwise choices; the Becker–DeGroot–Marschak method; and allocation questions. Clearly their relative efficiency in measuring risk aversion depends upon the number of questions asked, but the method itself may well influence the estimated risk aversion. While it is impossible to determine a ‘best’ method (as the truth is unknown), we can look at the differences between the methods. We carried out an experiment in four parts, corresponding to the four different methods, with 96 subjects. In analysing the data, our methodology involves fitting preference functionals; we use four: Expected Utility and Rank-Dependent Expected Utility, each combined with either a CRRA or a CARA utility function. Our results show that the inferred level of risk aversion is more sensitive to the elicitation method than to the assumed-true preference functional. Experimenters should worry most about context.
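As a concrete illustration of one of these methods, the sketch below shows how a CRRA coefficient maps into a switch row on a Holt–Laury style price list; the lottery payoffs follow the classic Holt–Laury design, but treat the exact values here as an assumption.

```python
# Illustrative sketch: the row at which a CRRA decision-maker switches from
# the safe to the risky lottery in a Holt-Laury style price list. Payoffs
# are the classic Holt-Laury (2002) values, taken here as an assumption.
import numpy as np

def crra(x, r):
    return np.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def predicted_switch_row(r):
    """First row (1..10) at which the risky option B has higher CRRA EU."""
    for k in range(1, 11):
        p = k / 10
        eu_a = p * crra(2.00, r) + (1 - p) * crra(1.60, r)   # safe lottery
        eu_b = p * crra(3.85, r) + (1 - p) * crra(0.10, r)   # risky lottery
        if eu_b > eu_a:
            return k
    return 11  # never switches

for r in (-0.5, 0.0, 0.5, 1.0):
    print(f"r = {r:+.1f}: switches at row {predicted_switch_row(r)}")
```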
An important issue for many economic experiments is how the experimenter can ensure sufficient power in order to reject one or more hypotheses. The paper illustrates how methods for testing multiple hypotheses simultaneously in adaptive, two-stage designs can be used to improve the power of economic experiments. We provide a concise overview of the relevant theory and illustrate the method in three different applications. These include a simulation study of a hypothetical experimental design, as well as illustrations using two data sets from previous experiments. The simulation results highlight the potential for sample size reductions, maintaining the power to reject at least one hypothesis while ensuring strong control of the overall Type I error probability.
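The adaptive two-stage machinery is beyond a short sketch, but the kind of multiplicity control at stake can be illustrated with the Holm step-down procedure, which provides strong control of the familywise error rate; the p-values below are invented.

```python
# Minimal sketch of strong familywise-error control via the Holm step-down
# procedure; the paper's adaptive two-stage method is richer, this only
# illustrates the multiplicity adjustment involved.
def holm_reject(pvals, alpha=0.05):
    """Return a list of booleans: which hypotheses Holm rejects."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                      # stop at the first non-rejection
    return reject

print(holm_reject([0.012, 0.30, 0.001]))   # -> [True, False, True]
```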
We replicate an influential study of monetary incentive effects by Jamal and Sunder (1991) to illustrate the difficulties of drawing causal inferences from a treatment manipulation when other features of the experimental design vary simultaneously. We first show that the Jamal and Sunder (1991) conclusions hinge on one of their laboratory market sessions, conducted only within their fixed-pay condition, that is characterized by a thin market and asymmetric supply and demand curves. When we replicate this structure multiple times under both fixed pay and pay tied to performance, our findings do not support Jamal and Sunder's (1991) conclusion about the incremental effects of performance-based compensation, suggesting that other features varied in that study likely account for their observed difference. Our ceteris paribus replication leaves us unable to offer any generalized conclusions about the effects of monetary incentives in other market structures, but the broader point is to illustrate that experimental designs that attempt to generalize effects by varying multiple features simultaneously can jeopardize the ability to draw causal inferences about the primary treatment manipulation.
Azrieli et al. (J Polit Econ, 2018) provide a characterization of incentive compatible payment mechanisms for experiments, assuming subjects’ preferences respect dominance but can have any possible subjective beliefs over random outcomes. If instead we assume subjects view probabilities as objective—for example, when dice or coins are used—then the set of incentive compatible mechanisms may grow. In this paper we show that it does, but the added mechanisms are not widely applicable. As in the subjective-beliefs framework, the only broadly applicable incentive compatible mechanism (assuming all preferences that respect dominance are admissible) is to pay subjects for one randomly selected decision.
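A minimal sketch of that mechanism, with invented payoff values: because exactly one decision is drawn for payment, each decision can be made as if it were the only one.

```python
# Sketch of the "pay one randomly selected decision" rule.
import random

def pay_one_random_decision(realized_payoffs, seed=None):
    """realized_payoffs: one realized payoff per decision task."""
    rng = random.Random(seed)
    k = rng.randrange(len(realized_payoffs))    # uniform draw over tasks
    return k, realized_payoffs[k]

task, payout = pay_one_random_decision([12.0, 3.5, 8.0], seed=7)
print(f"Paying task {task}: {payout}")
```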
Outcomes and strategies shown in control questions prior to experimental play may provide subjects with anchors or induce experimenter demand effects. In a Cournot oligopoly experiment we explore whether control questions influence subjects’ choices in initial periods and over the course of a repeated game. We vary the framing of the control question to explore the cause of potential influences. We find no evidence of an influence of the control question on choices, either in the first period or later in the game.
We investigate whether informal support is sensitive to the extent to which individuals can influence their income risk exposure by opting into risk. In a laboratory experiment with slum dwellers in Nairobi, we measure subjects’ transfers to a worse-off partner under both random assignment and self-selection into a safe or risky project. Our experimental design allows us to discriminate between different possible explanations for why giving behaviour might change when risk exposure is self-selected. We find that solidary support is independent of the partner’s choice of risk exposure, which contradicts attributions of responsibility for neediness and ex-post choice egalitarianism. Instead, we find that support depends on donors’ risk preferences. Risk-takers seem to feel less obliged to share the profits they earn from their choices compared to subjects who earn equally high profits by pure luck. Our results have important implications for anti-poverty policies that aim at encouraging risky investments.
Experimenter demand effects refer to changes in behavior by experimental subjects due to cues about what constitutes appropriate behavior. We argue that they can be either social or purely cognitive, and that, when they may exist, it crucially matters how they relate to the true experimental objectives. They are usually a potential problem only when they are positively correlated with the predictions of the true experimental objectives, and we identify techniques such as non-deceptive obfuscation to minimize this correlation. We discuss the persuasiveness or otherwise of defenses that can be used against demand-effects criticisms when such correlation remains an issue.
Experimental economics represents a strong growth industry. In the past several decades the method has expanded beyond intellectual curiosity, now meriting consideration alongside the other more traditional empirical approaches used in economics. Accompanying this growth is an influx of new experimenters who are in need of straightforward direction to make their designs more powerful. This study provides several simple rules of thumb that researchers can apply to improve the efficiency of their experimental designs. We buttress these points by including empirical examples from the literature.
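As one example of the kind of rule of thumb such guides discuss (not necessarily this paper's), the snippet below computes a per-arm sample size for a two-sample comparison of a standardized effect d via the usual normal approximation.

```python
# A standard power rule of thumb: per-arm sample size for a two-sample test
# of a standardized effect d, via the normal approximation
#   n ~= 2 * (z_{1-alpha/2} + z_{power})^2 / d^2.
import math
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * z**2 / d**2)

for d in (0.2, 0.5, 0.8):
    print(f"effect size {d}: n per arm ~ {n_per_arm(d)}")
```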
Incentivized methods for eliciting subjective probabilities in economic experiments present the subject with risky choices that encourage truthful reporting. We discuss the most prominent elicitation methods and their underlying assumptions, provide theoretical comparisons and give a new justification for the quadratic scoring rule. On the empirical side, we survey the performance of these elicitation methods in actual experiments, considering also practical issues of implementation such as order effects, hedging, and different ways of presenting probabilities and payment schemes to experimental subjects. We end with a discussion of the trade-offs involved in using incentives for belief elicitation and some guidelines for implementation.
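To illustrate the incentive property of the quadratic scoring rule mentioned above: a risk-neutral subject paid a - b(report - outcome)^2 maximizes expected payment by reporting the true belief. The grid check below uses invented payment parameters.

```python
# The quadratic scoring rule pays  a - b * (report - outcome)^2.  For a
# risk-neutral subject the expected payment is maximized by reporting the
# true belief; the grid search checks this for one invented belief.
import numpy as np

a, b, true_p = 1.0, 1.0, 0.7                      # payment scale is invented

def expected_payment(report, p):
    # outcome = 1 with probability p, 0 otherwise
    return p * (a - b * (report - 1) ** 2) + (1 - p) * (a - b * report ** 2)

grid = np.linspace(0, 1, 101)
best = grid[np.argmax([expected_payment(q, true_p) for q in grid])]
print(f"best report on the grid: {best:.2f} (true belief {true_p})")
```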
Berg et al. (Games and Economic Behavior, 10, pp. 122-142, 1995) study trust and reciprocity in an investment setting. They find significant amounts of trust and reciprocity and conclude that trust is a guiding behavioral instinct (a “primitive” in their terminology). We modify the way information is presented to participants and, through a questionnaire, prompt strategic reasoning. To our surprise, none of our various treatments led to a reduction in the amount invested. Previously reported experimental results to the contrary did not survive replication. Our results suggest that those by Berg, Dickhaut, and McCabe are rather robust to changes in information presentation and strategic reasoning prompts. We discuss the implications of these findings.
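For context, a minimal sketch of the investment-game payoff structure studied by Berg et al., abstracting from the receiver's own show-up endowment: the sender's transfer is tripled before the receiver chooses how much to return.

```python
# Payoff structure of the investment (trust) game: the sender's transfer is
# multiplied (tripled in the original design) before the receiver decides
# how much to return. The receiver's own endowment is omitted for brevity.
def investment_game(endowment, sent, returned, multiplier=3):
    assert 0 <= sent <= endowment
    assert 0 <= returned <= multiplier * sent
    sender = endowment - sent + returned
    receiver = multiplier * sent - returned
    return sender, receiver

print(investment_game(endowment=10, sent=5, returned=7))   # -> (12, 8)
```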
In experimental economics, where subjects participate in different sessions, observations across subjects of a given session might exhibit more correlation than observations across subjects in different sessions. The main goal of this paper is to clarify what session effects are: what can cause them, what forms they can take, and what problems they pose. It will be shown that standard solutions are at times inadequate, and that their properties are sometimes misunderstood.
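A minimal simulation, with invented numbers, of why this matters: when treatment varies only across sessions and each session carries a common shock, a subject-level test that ignores the clustering over-rejects a true null, while a test on session means does not.

```python
# Minimal simulation of the session-effects problem under a true null:
# a common shock per session inflates the subject-level false-positive
# rate; collapsing to session means restores validity. Numbers invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
sessions, per_session, shock_sd, sims = 6, 20, 0.5, 2000

naive_rej = cluster_rej = 0
for _ in range(sims):
    shocks = rng.normal(0, shock_sd, size=sessions)        # session effects
    y = shocks[:, None] + rng.normal(size=(sessions, per_session))
    g = np.arange(sessions) % 2                            # treatment by session
    naive_rej += ttest_ind(y[g == 1].ravel(), y[g == 0].ravel()).pvalue < 0.05
    cluster_rej += ttest_ind(y[g == 1].mean(axis=1), y[g == 0].mean(axis=1)).pvalue < 0.05

print(f"false-positive rate, subject-level test: {naive_rej / sims:.2f}")
print(f"false-positive rate, session-mean test:  {cluster_rej / sims:.2f}")
```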
This editorial suggests ways in which mental health science reform could yield more robust research and faster clinical progress. These include better animal and other models, a shift to transdiagnostic and clinically pragmatic classification systems, improved measurement, mission mapping and an entrepreneurial mindset aimed at taking advances rapidly to scale.
We examine the generalizability of single-topic studies, focusing on how often their confidence intervals capture the typical treatment effect from a larger population of possible studies. We show that the confidence intervals from these single-topic studies capture the typical effect from a population of topics at well below the nominal rate. For a plausible scenario, the confidence interval from a single-topic study might only be half as wide as an interval that captures the typical effect at the nominal rate. We highlight three important conclusions. First, we emphasize that researchers and readers must take care when generalizing the inferences from single-topic studies to a larger population of possible studies. Second, we demonstrate the critical importance of similarity across topics in drawing inferences and encourage researchers to consider designs that explicitly estimate and leverage similarity. Third, we emphasize that, despite their limitations, single-topic experiments have some important advantages.
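The coverage problem can be reproduced in a few lines: when topic-level effects vary around a typical effect, a single-topic 95% confidence interval covers that typical effect well below 95% of the time. All parameter values below are invented.

```python
# Sketch of the coverage shortfall: a single-topic CI targets that topic's
# effect, so it covers the *typical* (average) effect across topics less
# often than its nominal 95%. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(2)
tau_bar, sd_between, se_within, sims = 0.5, 0.3, 0.15, 100_000

tau_topic = rng.normal(tau_bar, sd_between, size=sims)   # this topic's effect
tau_hat = rng.normal(tau_topic, se_within)               # single-topic estimate
covers = np.abs(tau_hat - tau_bar) <= 1.96 * se_within   # does CI cover tau_bar?
print(f"coverage of the typical effect: {covers.mean():.2f}  (nominal 0.95)")
```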
Experimental asset markets with a constant fundamental value (FV) have grown in importance in recent years. However, a methodological examination of the robustness of experimental results in such a setting, which has been shown to produce bubbles, is lacking. In a laboratory experiment with 280 subjects, we investigate whether specific design features are sufficient to influence experimental results. In detail, we (1) vary the visual representation of the price chart and (2) provide subjects with full information about the FV process. We find overvaluation and bubble formation to be reduced when trading prices are displayed at the upper end of the price chart. Surprisingly, we do not find any effects when subjects have full information about the FV process.
Most economic experiments designed to test theories carefully choose specific games. This paper reports on an experimental design to evaluate how well the minimax hypothesis describes behavior across a population of games. Past studies suggest that the hypothesis is more accurate the closer the equilibrium is to equal probability play of all actions, but many differences between the designs make direct comparison impossible. We examine the minimax hypothesis by randomly sampling constant-sum games with two players and two actions with a unique equilibrium in mixed strategies. Varying only the games, we find behavior is more consistent with minimax play the closer the mixed strategy equilibrium is to equal probability play of each action. The results are robust over all iterations as well as early and final play. Experimental designs in which the game is a variable allow some conclusions to be drawn that cannot be drawn from more conventional experimental designs.
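For reference, the mixed-strategy equilibrium of a 2x2 zero-sum game follows from the indifference conditions; the sketch below computes it along with the distance of equilibrium play from 50/50, the dimension this design varies. The example game is invented.

```python
# Mixed-strategy equilibrium of a 2x2 zero-sum game with row payoff matrix
# [[a, b], [c, d]], from the indifference conditions; valid only when the
# game has no pure-strategy equilibrium.
def mixed_equilibrium(a, b, c, d):
    denom = a - b - c + d
    p = (d - c) / denom        # row player's probability on the top row
    q = (d - b) / denom        # column player's probability on the left column
    assert 0 < p < 1 and 0 < q < 1, "game has a pure-strategy equilibrium"
    return p, q

p, q = mixed_equilibrium(3, 0, 1, 2)      # an invented example game
print(f"p* = {p:.2f}, q* = {q:.2f}, distance from 50/50: {abs(p - 0.5) + abs(q - 0.5):.2f}")
```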