The analysis of data from experiments in economics routinely involves testing multiple null hypotheses simultaneously. These different null hypotheses arise naturally in this setting for at least three different reasons: when there are multiple outcomes of interest and it is desired to determine on which of these outcomes a treatment has an effect; when the effect of a treatment may be heterogeneous in that it varies across subgroups defined by observed characteristics and it is desired to determine for which of these subgroups a treatment has an effect; and finally when there are multiple treatments of interest and it is desired to determine which treatments have an effect relative to either the control or relative to each of the other treatments. In this paper, we provide a bootstrap-based procedure for testing these null hypotheses simultaneously using experimental data in which simple random sampling is used to assign treatment status to units. Using the general results in Romano and Wolf (Ann Stat 38:598–633, 2010), we show under weak assumptions that our procedure (1) asymptotically controls the familywise error rate—the probability of one or more false rejections—and (2) is asymptotically balanced in that the marginal probability of rejecting any true null hypothesis is approximately equal in large samples. Importantly, by incorporating information about dependence ignored in classical multiple testing procedures, such as the Bonferroni and Holm corrections, our procedure has much greater ability to detect truly false null hypotheses. In the presence of multiple treatments, we additionally show how to exploit logical restrictions across null hypotheses to further improve power. We illustrate our methodology by revisiting the study by Karlan and List (Am Econ Rev 97(5):1774–1793, 2007) of why people give to charitable causes.
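The core of such a procedure can be illustrated with a minimal sketch. The Python code below is not the authors' implementation; it is a Romano–Wolf-style stepdown max-t bootstrap for one binary treatment and several outcomes, assuming a pairs bootstrap and studentized mean differences. All names (`stepdown_maxT`, `n_boot`, and so on) are illustrative.

```python
import numpy as np

def diff_and_se(y, d):
    """Treated-minus-control mean differences and their standard errors, per outcome."""
    y1, y0 = y[d == 1], y[d == 0]
    diff = y1.mean(axis=0) - y0.mean(axis=0)
    se = np.sqrt(y1.var(axis=0, ddof=1) / len(y1) + y0.var(axis=0, ddof=1) / len(y0))
    return diff, se

def stepdown_maxT(y, d, alpha=0.05, n_boot=2000, seed=0):
    """Boolean array marking the null hypotheses rejected at familywise level alpha."""
    rng = np.random.default_rng(seed)
    n, k = y.shape
    diff, se = diff_and_se(y, d)
    t_obs = np.abs(diff / se)
    # Pairs bootstrap of the studentized statistics, recentred at the sample
    # estimates so their joint distribution mimics the one under the nulls.
    # (Assumes every bootstrap draw contains both treated and control units.)
    t_boot = np.empty((n_boot, k))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        d_b, s_b = diff_and_se(y[idx], d[idx])
        t_boot[b] = np.abs((d_b - diff) / s_b)
    rejected = np.zeros(k, dtype=bool)
    active = np.ones(k, dtype=bool)               # hypotheses not yet rejected
    while active.any():
        crit = np.quantile(t_boot[:, active].max(axis=1), 1 - alpha)
        newly = active & (t_obs > crit)
        if not newly.any():                       # nothing more to reject: stop
            break
        rejected |= newly
        active &= ~newly                          # step down and retest the rest
    return rejected

# Example: 200 units, 5 outcomes, a true effect on the first two outcomes only.
rng = np.random.default_rng(1)
d = rng.integers(0, 2, 200)
y = rng.normal(size=(200, 5))
y[:, :2] += 0.5 * d[:, None]
print(stepdown_maxT(y, d))
```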
The nonparametric Wilcoxon-Mann-Whitney test is commonly used by experimental economists for detecting differences in central tendency between two samples. This test is only theoretically appropriate under certain assumptions concerning the population distributions from which the samples are drawn, and is often used in cases where it is unclear whether these assumptions hold, and even when they clearly do not hold. Fligner and Policello's (1981, Journal of the American Statistical Association, 76, 162-168) robust rank-order test is a modification of the Wilcoxon-Mann-Whitney test, designed to be appropriate in more situations than Wilcoxon-Mann-Whitney. This paper uses simulations to compare the performance of the two tests under a variety of distributional assumptions. The results are mixed. The robust rank-order test tends to yield too many false positive results for medium-sized samples, but this liberalness is relatively invariant across distributional assumptions, and seems to be due to a deficiency of the normal approximation to its test statistic's distribution, rather than the test itself. The performance of the Wilcoxon-Mann-Whitney test varies hugely, depending on the distributional assumptions; in some cases it is conservative, in others extremely liberal. The tests have roughly similar power. Overall, the robust rank-order test performs better than Wilcoxon-Mann-Whitney, though when critical values for the robust rank-order test are not available, so that the normal approximation must be used, their relative performance depends on the underlying distributions, the sample sizes, and the level of significance used.
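A hedged sketch of the kind of size simulation described above follows: it compares empirical rejection rates of the Wilcoxon–Mann–Whitney test and of the robust rank-order statistic (via its normal approximation) when the null of equal central tendency is true but variances differ. The Fligner–Policello statistic is coded from the standard placement formula; the sample sizes, distributions, and number of replications are illustrative choices, not the paper's design.

```python
import numpy as np
from scipy.stats import mannwhitneyu, norm

def fligner_policello(x, y):
    """Robust rank-order statistic with its normal-approximation p-value."""
    p = np.array([(y < xi).sum() for xi in x])   # placements of each x among the y's
    q = np.array([(x < yj).sum() for yj in y])   # placements of each y among the x's
    pbar, qbar = p.mean(), q.mean()
    vx, vy = ((p - pbar) ** 2).sum(), ((q - qbar) ** 2).sum()
    u = (p.sum() - q.sum()) / (2 * np.sqrt(vx + vy + pbar * qbar))
    return u, 2 * norm.sf(abs(u))

rng = np.random.default_rng(0)
n_sim, alpha, m, n = 2000, 0.05, 16, 16
rej_mwu = rej_rro = 0
for _ in range(n_sim):
    x = rng.normal(0, 1, m)          # equal medians, unequal spreads: the null is true
    y = rng.normal(0, 3, n)
    rej_mwu += mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
    rej_rro += fligner_policello(x, y)[1] < alpha
print("empirical size:", rej_mwu / n_sim, rej_rro / n_sim)
```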
This paper introduces a mixture model based on the beta distribution, without pre-established means and variances, to analyze a large set of Beauty-Contest data obtained from diverse groups of experiments (Bosch-Domènech et al. 2002). This model provides a better fit to the experimental data, and lends more precision to the hypothesis that a large proportion of individuals follow a common pattern of reasoning, described as Iterated Best Reply (degenerate), than mixture models based on the normal distribution. The analysis shows that the means of the distributions across the groups of experiments are quite stable, while the proportions of choices at different levels of reasoning vary across groups.
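As a rough illustration of the approach (not the paper's estimator), the following sketch fits a two-component beta mixture to Beauty-Contest choices rescaled to the open unit interval by direct maximum likelihood; the number of components, starting values, and the rescaling are assumptions made for the example.

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of a two-component beta mixture."""
    a1, b1, a2, b2, logit_w = params
    w = 1 / (1 + np.exp(-logit_w))          # mixing weight in (0, 1)
    dens = w * beta.pdf(x, a1, b1) + (1 - w) * beta.pdf(x, a2, b2)
    return -np.log(np.clip(dens, 1e-300, None)).sum()

rng = np.random.default_rng(0)
# Simulated stand-in for choices in [0, 100], rescaled to the open unit interval.
choices = np.concatenate([rng.beta(8, 16, 300), rng.beta(2, 2, 200)]) * 100
x = np.clip(choices / 100, 1e-6, 1 - 1e-6)
res = minimize(neg_loglik, x0=[2, 4, 2, 2, 0.0], args=(x,),
               bounds=[(0.1, 50)] * 4 + [(-10, 10)], method="L-BFGS-B")
print(res.x)    # [a1, b1, a2, b2, logit of the mixing weight]
```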
It is shown that for two-dimensional commodity spaces any homothetic utility function that rationalizes each pair of observations in a set of consumption data also rationalizes the entire set. The result is used to provide a simplified nonparametric test for homotheticity of demand and a measure for homothetic efficiency. The article thus provides a useful tool to screen data for severe violations of homotheticity before estimating parameters of homothetic utility functions. The new test and measure are applied to previously published data.
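A minimal sketch of the resulting pairwise screen is given below, assuming the usual homothetic-axiom inequality for a pair of observations, (p_i·x_j)(p_j·x_i) ≥ (p_i·x_i)(p_j·x_j); the function name, data layout, and tolerance are illustrative.

```python
import numpy as np
from itertools import combinations

def pairwise_homotheticity(prices, bundles, tol=1e-12):
    """prices, bundles: (T, 2) arrays of observed prices and chosen bundles."""
    for i, j in combinations(range(len(prices)), 2):
        cross = (prices[i] @ bundles[j]) * (prices[j] @ bundles[i])
        own = (prices[i] @ bundles[i]) * (prices[j] @ bundles[j])
        if cross < own - tol:
            return False          # the pair (i, j) violates homotheticity
    return True

# Example: equal-share Cobb-Douglas demand (homothetic) passes the check.
prices = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])
income = np.array([10.0, 10.0, 12.0])
bundles = np.column_stack([0.5 * income / prices[:, 0], 0.5 * income / prices[:, 1]])
print(pairwise_homotheticity(prices, bundles))   # True
```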
This article surveys the use of nonparametric permutation tests for analyzing experimental data. The permutation approach, which involves randomizing or permuting features of the observed data, is a flexible way to draw statistical inferences in common experimental settings. It is particularly valuable when few independent observations are available, a frequent occurrence in controlled experiments in economics and other social sciences. The permutation method constitutes a comprehensive approach to statistical inference. In two-treatment testing, permutation concepts underlie popular rank-based tests, like the Wilcoxon and Mann–Whitney tests. But permutation reasoning is not limited to ordinal contexts. Analogous tests can be constructed from the permutation of measured observations—as opposed to rank-transformed observations—and we argue that these tests should often be preferred. Permutation tests can also be used with multiple treatments, with ordered hypothesized effects, and with complex data structures, as in hypothesis testing in the presence of nuisance variables. Drawing examples from the experimental economics literature, we illustrate how permutation testing solves common challenges. Our aim is to help experimenters move beyond the handful of overused tests in play today and to instead see permutation testing as a flexible framework for statistical inference.
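The basic building block the survey recommends can be sketched in a few lines: a two-sample permutation test computed on the measured observations (difference in means) rather than on ranks. The group labels, test statistic, and number of permutations below are illustrative.

```python
import numpy as np

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sided permutation p-value for the difference in means between x and y."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    obs = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # reassign group labels at random
        count += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= abs(obs)
    return (count + 1) / (n_perm + 1)             # p-value with the usual +1 correction
```

Recent versions of SciPy also provide `scipy.stats.permutation_test`, which implements the same idea with more options for statistics and resampling schemes.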
The intergroup prisoner's dilemma game was suggested by Bornstein (1992, Journal of Personality and Social Psychology, 7, 597-606) for modelling intergroup conflicts over continuous public goods. We analyse data from an experiment in which the game was played for 150 rounds, under three matching conditions. The objective is to study differences in the investment patterns of players in the different groups. A repeated measures analysis was conducted by Goren and Bornstein (1999, Games and Human Behaviour: Essays in Honor of Amnon Rapoport, pp. 299-314), involving data aggregation and strong distributional assumptions. Here we introduce a nonparametric approach based on permutation tests. Two new measures, the cumulative investment and the normalised cumulative investment, provide additional insight into the differences between groups. The proposed tests are based on the area under the investment curves. They identify an overall difference between the groups as well as pairwise differences. A simultaneous confidence band for the mean difference curve is used to detect the games which account for any pairwise difference.
Recently, there has been a surge of interest in how common macroeconomic factors affect different economic outcomes. We propose a semiparametric dynamic panel model to analyze the impact of common regressors on the conditional distribution of the dependent variable (the global output growth distribution in our case). Our model allows conditional mean, variance, and skewness to be influenced by common regressors, whose effects can be nonlinear and time-varying driven by contextual variables. By incorporating dynamic structures and individual unobserved heterogeneity, we propose a consistent two-step estimator and showcase its attractive theoretical and numerical properties. We apply our model to investigate the impact of US financial uncertainty on the global output growth distribution. We find that an increase in US financial uncertainty significantly shifts the output growth distribution leftward during periods of market pessimism. In contrast, during periods of market optimism, the increased uncertainty in the US financial markets expands the spread of the output growth distribution without a significant location change, indicating increased future uncertainty.
Decision making can be a complex process requiring the integration of several attributes of choice options. Understanding the neural processes underlying (uncertain) investment decisions is an important topic in neuroeconomics. We analyzed functional magnetic resonance imaging (fMRI) data from an investment decision study for stimulus-related effects. We propose a new technique for identifying activated brain regions: the Cluster, Estimation, Activation, and Decision method. Our analysis is focused on clusters of voxels rather than voxel units. Thus, we achieve a higher signal-to-noise ratio within the unit tested and a smaller number of hypothesis tests compared with the often-used general linear model (GLM). We propose to first conduct the brain parcellation by applying spatially constrained spectral clustering. The information within each cluster can then be extracted by the flexible dynamic semiparametric factor model (DSFM) dimension reduction technique and finally be tested for differences in activation between conditions. This sequence of Cluster, Estimation, Activation, and Decision admits a model-free analysis of the local fMRI signal. Applying a GLM on the DSFM-based time series resulted in a significant correlation between the risk of choice options and changes in fMRI signal in the anterior insula and dorsomedial prefrontal cortex. Additionally, individual differences in decision-related reactions within the DSFM time series predicted individual differences in risk attitudes as modeled with the framework of the mean-variance model.
Is the quality of a 91-point wine significantly different from that of an 89-point wine? Which wines are underpriced relative to their evaluation of quality? This paper addresses these questions by constructing a novel wine rating system based on scores assigned by a panel of wine experts to a set of wines. Wines are classified in ranked disjoint quality equivalence classes using measures of statistically significant and commercially relevant score differences. The rating system is applied to the “Judgment of Paris” wine competition, to data of Bordeaux en-primeur expert scores and prices, and to expert scores and price categories of a large database of Italian wines. The proposed wine rating system provides an informative assessment of wine quality for producers and consumers and a flexible rating methodology for commercial applications.
The purpose of this paper is to analyse the effects of natural resources on income inequality conditional on economic complexity in 111 developed and developing countries from 1995 to 2016. The system-GMM results show that economic complexity reverses the positive effects of natural resource dependence on income inequality. Furthermore, results are robust to the distinction between dependence on point resources (fossil fuels, ores, and metals), dependence on diffuse resources (agricultural raw material), and resource abundance. Finally, there are significant differences between countries, depending on the level of ethnic fragmentation and democracy.
Risk measurements are clearly central to risk management, in particular for banks, (re)insurance companies, and investment funds. The question of the appropriateness of risk measures for evaluating the risk of financial institutions has been heavily debated, especially after the financial crisis of 2008/2009. Another concern for financial institutions is the pro-cyclicality of risk measurements. In this paper, we extend existing work on the pro-cyclicality of Value-at-Risk to its main competitors, Expected Shortfall and Expectile: we compare the pro-cyclicality of historical quantile-based risk estimation, taking into account the market state. To characterise the latter, we propose various estimators of the realised volatility. Considering the family of augmented GARCH(p, q) processes (containing well-known GARCH models and iid models as special cases), we prove that the strength of pro-cyclicality depends on three factors: the choice of risk measure and its estimator, the realised volatility estimator, and the model considered; but, regardless of these choices, pro-cyclicality is always present. We complement this theoretical analysis by performing simulation studies in the iid case and developing a case study on real data.
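For concreteness, here is a hedged sketch of the three historical risk estimators being compared, applied to a sample of losses; the expectile is obtained by minimizing the asymmetric squared loss. The confidence levels and the use of `scipy.optimize.minimize_scalar` are illustrative choices, not the paper's estimators as implemented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hist_var(losses, level=0.95):
    """Historical Value-at-Risk: the empirical quantile of the losses."""
    return np.quantile(losses, level)

def hist_es(losses, level=0.95):
    """Historical Expected Shortfall: mean loss beyond the VaR."""
    v = hist_var(losses, level)
    return losses[losses >= v].mean()

def hist_expectile(losses, tau=0.95):
    """Historical expectile: minimizer of the asymmetric squared loss."""
    obj = lambda e: np.mean(np.where(losses >= e, tau, 1 - tau) * (losses - e) ** 2)
    return minimize_scalar(obj, bounds=(losses.min(), losses.max()),
                           method="bounded").x

losses = np.random.default_rng(0).standard_t(5, 1000)   # stand-in loss sample
print(hist_var(losses), hist_es(losses), hist_expectile(losses))
```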
While the linkage between team performance and attendances is well established, there has been negligible previous research using club memberships as an alternative indicator of demand for sport. Little attention has been paid to how the number of memberships is affected by common measures of team performance, such as the team’s win-ratio. This study utilises a previously unavailable long-range time-series data set of annual memberships for an Australian Football League (AFL) club, Hawthorn FC. A succession of basic correlation analyses demonstrates that, while the relationship between club membership numbers and win-ratios is strongly positive, as it is for attendances (for most of the sample), some of the finer properties are substantially different. It is suggested that much of the reason for this lies in differences in the segmented nature of the markets for attendances and memberships.
This paper uses a difference-in-differences approach to analyze the treatment effect of a hail weather shock in a specific Swiss wine-growing region. We exploit a natural experiment from Switzerland's Three Lakes wine region in 2013 and examine its impact on the country's retail market. We find statistically significant (at the 1% level) effects of –22.8% and +2.8% for the volume and price of wine consumed, respectively. These effects can be interpreted as average treatment effects, that is, the difference in outcomes between the treatment and control groups in a pre-post shock design. Several robustness checks confirm the statistical significance of the estimated effects and the initial assumptions.
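The estimate reported above comes from a design that can be sketched as a canonical two-period difference-in-differences regression, in which the treatment effect is the coefficient on the treated × post interaction. The data below are simulated stand-ins, and the column names, robust-variance choice, and magnitudes are illustrative, not the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in: one treated and one control region, pre and post periods.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "treated": np.repeat([0, 1], 200),
    "post": np.tile(np.repeat([0, 1], 100), 2),
})
df["log_volume"] = (3 + 0.1 * df.post - 0.25 * df.treated * df.post
                    + rng.normal(0, 0.1, len(df)))
did = smf.ols("log_volume ~ treated * post", data=df).fit(cov_type="HC1")
print(did.params["treated:post"])      # the difference-in-differences estimate
print(did.pvalues["treated:post"])
```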
The study applies parametric and nonparametric estimation methods to determine hedonic prices of rice quality attributes, and a partial equilibrium model to determine the payoff to investing in quality improvement in five countries in Sub-Saharan Africa. Results indicate that consumers are willing to pay price premiums for head rice, slender grains, peak viscosity, parboiled rice, and rice sold in urban markets. However, they strongly discount amylose content, rice with impurities, and imported rice. Investing in quality improvement through amylose content reduction leads to net welfare gains, with a benefit-cost ratio of 47.86 and an internal rate of return of 90%.
We estimate a travel cost model for the George Washington & Jefferson National Forests using an On-Site Latent Class Poisson Model. We show that the constraints of ad hoc truncation and homogeneous preferences significantly impact consumer surplus estimates derived from the on-site travel cost model. By relaxing these constraints, we show that more than one class of visitors with unique preferences exists in the population. The resulting demand functions, price-responsive behaviors, and consumer surplus estimates reflect differences across these classes of visitors. With heterogeneous preferences, a group of ‘local residents’ exists with a probability of 8% and, on average, takes 113 visits.
A machine learning approach to zero-inflated Poisson (ZIP) regression is introduced to address a common difficulty arising from imbalanced financial data. The suggested ZIP model can be interpreted as an adaptive weight adjustment procedure that removes the need for post-modeling re-calibration and results in a substantial enhancement of predictive accuracy. Notwithstanding the increased complexity due to the expanded parameter set, we utilize cyclic coordinate descent optimization to implement the ZIP regression, with adjustments made to address saddle points. We also study how various approaches alleviate the potential drawbacks of incomplete exposures in insurance applications. The procedure is tested on real-life data. We demonstrate a significant improvement in performance relative to other popular alternatives, which justifies our modeling techniques.
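To make the structure concrete, the sketch below writes out the zero-inflated Poisson log-likelihood with an exposure offset and maximizes it with a generic optimizer (L-BFGS-B) rather than the authors' cyclic coordinate descent; the covariates, exposure handling, and parameterization (logit zero-inflation, log link) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_negloglik(params, X, y, exposure):
    """Negative log-likelihood of a zero-inflated Poisson with an exposure offset."""
    k = X.shape[1]
    beta, gamma = params[:k], params[k:]
    lam = exposure * np.exp(X @ beta)        # Poisson mean, scaled by exposure
    pi = expit(X @ gamma)                    # probability of a structural zero
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.where(y == 0, ll_zero, ll_pos).sum()

# Simulated stand-in for a heavily zero-inflated claim-count data set.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
exposure = rng.uniform(0.5, 1.0, n)
lam = exposure * np.exp(0.2 + 0.5 * X[:, 1])
y = np.where(rng.random(n) < 0.6, 0, rng.poisson(lam))
fit = minimize(zip_negloglik, np.zeros(4), args=(X, y, exposure), method="L-BFGS-B")
print(fit.x)    # [beta0, beta1, gamma0, gamma1]
```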
To evaluate a large portfolio of variable annuity (VA) contracts, many insurance companies rely on Monte Carlo simulation, which is computationally intensive. To address this computational challenge, machine learning techniques have been adopted in recent years to estimate the fair market values (FMVs) of a large number of contracts. It is shown that bootstrapped aggregation (bagging), one of the most popular machine learning algorithms, performs well in valuing VA contracts using related attributes. In this article, we highlight the presence of prediction bias of bagging and use the bias-corrected (BC) bagging approach to reduce the bias and thus improve the predictive performance. Experimental results demonstrate the effectiveness of BC bagging as compared with bagging, boosting, and model points in terms of prediction accuracy.
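One common form of this idea can be sketched as follows: fit a bagged tree ensemble on a set of representative contracts valued by Monte Carlo, estimate the systematic prediction bias from the out-of-bag residuals, and subtract it before scoring the full portfolio. The exact bias correction used in the article may differ; the contract attributes and hyperparameters below are simulated stand-ins.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n, k = 2000, 6                          # representative contracts and their attributes
X = rng.normal(size=(n, k))
fmv = 100 + 20 * X[:, 0] - 15 * X[:, 1] ** 2 + rng.normal(0, 5, n)   # stand-in FMVs

bag = BaggingRegressor(DecisionTreeRegressor(max_depth=6),
                       n_estimators=200, oob_score=True, random_state=0).fit(X, fmv)
bias = (bag.oob_prediction_ - fmv).mean()        # estimated systematic prediction bias
X_portfolio = rng.normal(size=(100_000, k))      # full portfolio to be valued
fmv_hat = bag.predict(X_portfolio) - bias        # bias-corrected bagged estimates
print(bias, fmv_hat[:5])
```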
Are institutions a deep cause of economic growth? This paper tries to answer this question in a novel manner by focusing on within-country variation, over long periods of time, using a new hand-collected data set on institutions and the power-ARCH econometric framework. Focusing on the case of Brazil since 1870, our results suggest (a) that both changes in formal political institutions and informal political instability affect economic growth negatively, (b) that there are important differences in terms of their short- versus long-run behaviour, and (c) that not all, but just a few selected, institutions affect economic growth in the long run.
This paper provides a toolbox for the credibility analysis of frequency risks, with allowance for the seniority of claims and of risk exposure. We use Poisson models with dynamic and second-order stationary random effects that ensure nonnegative credibilities per period. We specify classes of autocovariance functions that are compatible with positive random effects and that entail nonnegative credibilities regardless of the risk exposure. Random effects with nonnegative generalized partial autocorrelations are shown to imply nonnegative credibilities. This holds for ARFIMA(0, d, 0) models. The AR(p) time series that ensure nonnegative credibilities are specified from their precision matrices. The compatibility of these semiparametric models with log-Gaussian random effects is verified. Gaussian sequences with ARFIMA(0, d, 0) specifications, which are then exponentiated entrywise, provide positive random effects that also imply nonnegative credibilities. Dynamic random effects applied to Poisson distributions are retained as products of two uncorrelated and positive components: the first is time-invariant, whereas the autocovariance function of the second vanishes at infinity and ensures nonnegative credibilities. The limit credibility is related to the three levels for the length of the memory in the random effects. The limit credibility is less than one in the short memory case, and a formula is provided.
This article explores the global cycle hypothesis by testing whether the US stock market serves as an explanatory variable for the evolution of expansions and contractions in the UK stock market from 1922 until 2016. Alternatively, it tests an index that groups the stock markets of advanced economies to identify whether this driving force is international. Second, regarding co-movement with the US, the article explores whether its time-varying nature is contingent on the domestic and international economic policy regimes. I find evidence of a strong and contemporaneous co-movement between the US and UK stock markets. Additionally, through a VAR model, I identify that movements in the UK stock market cause, in the Granger sense, changes in the index for advanced economies up to two years later. Furthermore, in the short run, co-movement between the US and UK stock markets is contingent on the macroeconomic trilemma, while, in the long run, both domestic and international policy regimes affect the relationship. A final contribution is the design of a new methodology for describing the evolution of financial time series as risk-adjusted above- or below-average returns at different time horizons: the Local Bull Bear Indicators (LBBIs).
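The Granger-causality step can be illustrated with a short sketch using simulated stand-ins for annual UK and advanced-economy returns; the series, the two-year lag length, and the test call are illustrative, and the LBBI construction itself is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated stand-in: the advanced-economies index responds to the UK market
# with a one-year delay, so the UK should Granger-cause the index.
rng = np.random.default_rng(0)
t = 95                                           # roughly 1922-2016, annual data
uk = rng.normal(0.05, 0.15, t)
adv = 0.4 * np.roll(uk, 1) + rng.normal(0.05, 0.1, t)
data = pd.DataFrame({"uk": uk, "adv": adv}).iloc[1:]   # drop the wrap-around edge

res = VAR(data).fit(maxlags=2)
print(res.test_causality("adv", causing="uk", kind="f").summary())
```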