Accurately estimating risk preferences is of critical importance when evaluating data from many economic experiments or strategic interactions. I use a simulation model to conduct power analyses over two lottery batteries designed to classify individual subjects as being best explained by one of a number of alternative specifications of risk preference models. I propose a case in which there are only two possible alternatives for classification and find that the statistical methods used to classify subjects result in type I and type II errors at rates far beyond traditionally acceptable levels. These results suggest that subjects in experiments must make significantly more choices, or that traditional lottery pair batteries need to be substantially redesigned to make accurate inferences about the risk preference models that characterize a subject’s choices.
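The classification exercise above can be illustrated with a deliberately simplified toy: subjects are reduced to a single "safe-choice" probability per model, and each simulated subject is assigned to whichever of two candidate models has the higher log-likelihood. This is a minimal sketch of the simulation-based power-analysis idea, not the paper's lottery batteries or risk-preference specifications; the probabilities and sample sizes are hypothetical.

```python
import math
import random

def classify(choices, p_a, p_b):
    # Log-likelihood of a sequence of binary choices under each model:
    # model A predicts the "safe" option with probability p_a, model B with p_b.
    ll_a = sum(math.log(p_a if c else 1 - p_a) for c in choices)
    ll_b = sum(math.log(p_b if c else 1 - p_b) for c in choices)
    return "A" if ll_a >= ll_b else "B"

def error_rate(true_p, p_a, p_b, n_choices, n_subjects, seed=0):
    # Simulate subjects whose true safe-choice probability is true_p,
    # classify each one, and return the misclassification rate.
    rng = random.Random(seed)
    wrong = 0
    for _ in range(n_subjects):
        choices = [rng.random() < true_p for _ in range(n_choices)]
        if classify(choices, p_a, p_b) != "A":
            wrong += 1
    return wrong / n_subjects

# "Type I"-style error: subjects truly follow model A (p = 0.6) but are labelled B.
print(error_rate(true_p=0.6, p_a=0.6, p_b=0.4, n_choices=20, n_subjects=2000))
```

Even in this idealized two-model setting, 20 choices per subject leave the misclassification rate well above conventional 5% levels, which is the flavor of the paper's power-analysis result.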
James D. Hamilton's 2018 article attracted considerable attention for its provocative title, "Why you should never use the Hodrick-Prescott filter." The article also introduced an alternative detrending method, the Hamilton regression filter (HRF). Hamilton's work has frequently been read as a proposal to replace the Hodrick–Prescott (HP) filter with the HRF, that is, to use and interpret the HRF much as HP detrending is used. This paper disputes that reading, particularly for quarterly business cycle data on aggregate output. Focusing on economic fluctuations in the United States, the study generates a large volume of artificial data that follow a known process combining a trend and a cyclical component. The objective is to assess how accurately each detrending approach recovers the true decomposition of the data. In addition to the standard HP smoothing parameter of $\lambda = 1600$, the study also examines values of $\lambda ^{\star }$ from earlier research that are seven to twelve times larger. On three distinct statistical measures of the discrepancy between the estimated and true trends, both versions of the HP filter clearly outperform the HRF, and HP with $\lambda ^{\star }$ consistently outperforms HP-1600.
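The HP filter solves a penalized least-squares problem: the trend $\tau$ minimizes $\sum_t (y_t-\tau_t)^2 + \lambda \sum_t (\Delta^2 \tau_t)^2$, equivalent to the linear system $(I + \lambda D^\top D)\tau = y$ with $D$ the second-difference operator. A minimal pure-Python sketch (dense solve, illustrative only; real implementations exploit the banded structure):

```python
def hp_filter(y, lamb=1600.0):
    """Hodrick-Prescott filter via (I + lamb * D'D) tau = y, where D is the
    (n-2) x n second-difference operator. lamb = 1600 is the quarterly convention."""
    n = len(y)
    # Build A = I + lamb * D'D.
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for r in range(n - 2):          # row r of D has (1, -2, 1) in cols r, r+1, r+2
        d = [0.0] * n
        d[r], d[r + 1], d[r + 2] = 1.0, -2.0, 1.0
        for i in (r, r + 1, r + 2):
            for j in (r, r + 1, r + 2):
                A[i][j] += lamb * d[i] * d[j]
    # Gaussian elimination with partial pivoting, then back substitution.
    b = list(y)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    tau = [0.0] * n
    for r in range(n - 1, -1, -1):
        tau[r] = (b[r] - sum(A[r][j] * tau[j] for j in range(r + 1, n))) / A[r][r]
    cycle = [yi - ti for yi, ti in zip(y, tau)]
    return tau, cycle
```

A useful sanity check: since second differences annihilate linear functions, a purely linear series passes through the filter unchanged and the extracted cycle is (numerically) zero.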
We propose to establish wine rankings using scores that depend on the differences between favorable and unfavorable opinions about each wine, according to the Borda rule. Unlike alternative approaches and specifications, this method is well-defined even if the panelists’ quality relations are not required to exhibit demanding properties such as transitivity or acyclicity. As an illustration, we apply the method to rank wines assessed by different experts and compare the resulting ranking with that obtained according to Condorcet's method of majority voting.
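One natural reading of a "favorable minus unfavorable" Borda-style score is sketched below: each panelist contributes a set of pairwise judgments, and a wine's score is its wins minus its losses across all judgments. Because only pairs are counted, the score is well-defined even when a panelist's judgments form a cycle. This is an illustration of that general idea, not necessarily the authors' exact specification; the wine names are hypothetical.

```python
from collections import defaultdict

def net_scores(panelists):
    # Each panelist is a set of ordered pairs (a, b) meaning "a is better than b".
    # Score = favorable mentions minus unfavorable mentions; no transitivity needed.
    score = defaultdict(int)
    for prefs in panelists:
        for a, b in prefs:
            score[a] += 1
            score[b] -= 1
    return dict(score)

panel = [
    {("Wine A", "Wine B"), ("Wine B", "Wine C"), ("Wine C", "Wine A")},  # a cycle
    {("Wine A", "Wine B"), ("Wine A", "Wine C")},
]
print(sorted(net_scores(panel).items(), key=lambda kv: -kv[1]))
```

Note that the first panelist's cyclic judgments contribute zero net score to every wine, so the cycle neither breaks the method nor distorts the ranking.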
This article is a response to a piece in this journal by David Bartram, which questions the validity of a vast literature consistently establishing a U-shaped relationship between age and happiness. There are 618 published studies finding U-shapes in that relationship across 145 countries, and only a handful that do not. Of the 30 countries that Bartram (2023, National Institute Economic Review, 1–15) examines, he finds U-shapes in 18. We show compelling evidence of U-shapes in the remaining dozen countries. Supporting evidence of a U-shape is also found in objective measures, including deaths of despair, depression, stress, and pain, all of which are worst in midlife.
Modeling and forecasting mortality rates is central to a wide range of actuarial practices, such as the design of pension schemes. To improve forecasting accuracy, many recent mortality models incorporate age coherence, which requires that long-term forecasts not diverge indefinitely across age groups. Despite their usefulness, individual mortality models are prone to misspecification in empirical applications, which harms the reliability and accuracy of their forecasts. In this study, an ensemble averaging, or model averaging (MA), approach is proposed that adopts age-specific weights and asymptotically achieves age coherence in mortality forecasting. The ensemble space contains both newly developed age-coherent and classic age-incoherent models to ensure diversity. To realize asymptotic age coherence, account for parameter error, and avoid overfitting, the proposed method minimizes the variance of out-of-sample forecasting errors subject to purpose-built coherence and smoothness penalties. Our empirical data set consists of mortality rates for age groups 0–100 in ten European countries, spanning 1950–2016. The MA approach performs outstandingly on this sample, and the finding holds robustly across a range of sensitivity analyses. A case study of the Italian population demonstrates the improved forecasting efficiency of MA, the validity of the proposed weight estimation, and its usefulness in actuarial applications such as annuity pricing.
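The core idea of choosing ensemble weights to minimize out-of-sample forecast error can be shown with a toy two-model example. This is a bare-bones stand-in for the paper's estimator (it omits the age-specific weights and the coherence and smoothness penalties); the forecast values are hypothetical.

```python
def ensemble_weight(f1, f2, actual, grid=101):
    # Pick w in [0, 1] minimizing the mean squared error of the blended
    # forecast w * f1 + (1 - w) * f2 on held-out data (grid search).
    def mse(w):
        return sum((w * a + (1 - w) * b - y) ** 2
                   for a, b, y in zip(f1, f2, actual)) / len(actual)
    best_err, best_w = min((mse(k / (grid - 1)), k / (grid - 1))
                           for k in range(grid))
    return best_w, best_err

# One model systematically over-predicts, the other under-predicts;
# the optimal blend does better than either alone.
actual = [1.0, 2.0, 3.0, 4.0]
over   = [1.4, 2.4, 3.4, 4.4]
under  = [0.6, 1.6, 2.6, 3.6]
w, err = ensemble_weight(over, under, actual)
```

With symmetric biases, the optimal weight is 0.5 and the blended error collapses to zero: the averaging step can cancel complementary model misspecifications, which is the intuition behind including both age-coherent and age-incoherent models in the ensemble.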
The outcome of the famous 1976 Judgment of Paris, a blind wine tasting of ten wines by nine French judges, brought American wines to the forefront of the wine business. A Californian wine, the 1973 Stag's Leap Wine Cellars S.L.V. Cabernet Sauvignon, was declared the winner, surpassing four highly prized French wines (Château Mouton-Rothschild 1970, Château Montrose 1970, Château Haut-Brion 1970, and Château Léoville Las Cases 1971). We collect ratings from experts for (almost) all vintages of the same ten wines over the years 1968–2021 and find that the Stag's Leap Cabernet Sauvignon is far from being first. We conclude that either the 1973 vintage was overrated by the experts who tasted it in 1976, or 1973 was merely an outlier in this winery.
Discrete choice experiments are used to collect data that facilitates measurement and understanding of consumer preferences. A sample of 750 respondents was employed to evaluate a new method of best-worst scaling data collection. This new method decreased the number of attributes and questions while discerning preferences for a larger set of attributes through self-stated preference “filter” questions. The new best-worst method resulted in overall equivalent rates of transitivity violations and lower incidences of attribute non-attendance than standard best-worst scaling designs. The new method of best-worst scaling data collection can be successfully employed to efficiently evaluate more attributes while improving data quality.
This paper investigates a high-dimensional vector-autoregressive (VAR) model in mortality modeling and forecasting. We propose an extension of the sparse VAR (SVAR) model fitted on the log-mortality improvements, which we name “spatially penalized smoothed VAR” (SSVAR). By adaptively penalizing the coefficients based on the distances between ages, SSVAR not only allows a flexible data-driven sparsity structure of the coefficient matrix but simultaneously ensures interpretable coefficients including cohort effects. Moreover, by incorporating the smoothness penalties, divergence in forecast mortality rates of neighboring ages is largely reduced, compared with the existing SVAR model. A novel estimation approach that uses the accelerated proximal gradient algorithm is proposed to solve SSVAR efficiently. Similarly, we propose estimating the precision matrix of the residuals using a spatially penalized graphical Lasso to further study the dependency structure of the residuals. Using the UK and France population data, we demonstrate that the SSVAR model consistently outperforms the famous Lee–Carter, Hyndman–Ullah, and two VAR-type models in forecasting accuracy. Finally, we discuss the extension of the SSVAR model to multi-population mortality forecasting with an illustrative example that demonstrates its superiority in forecasting over existing approaches.
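The proximal gradient machinery named above alternates a gradient step on the smooth loss with the proximal operator of the penalty; for an $\ell_1$ penalty the prox is soft thresholding. A minimal scalar sketch of plain (unaccelerated) ISTA, purely to show the mechanics rather than the SSVAR estimation itself:

```python
def soft_threshold(x, t):
    # Proximal operator of t * |.|: shrink x toward zero by t.
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista_scalar(y, lam, step=0.5, iters=200):
    # Proximal gradient (ISTA) for min_x 0.5 * (x - y)**2 + lam * |x|:
    # gradient step on the smooth part, then the l1 prox with threshold step*lam.
    x = 0.0
    for _ in range(iters):
        x = soft_threshold(x - step * (x - y), step * lam)
    return x
```

The closed-form solution of this scalar problem is `soft_threshold(y, lam)`, so the iteration can be checked against it directly; the SSVAR estimator applies the same step structure to a penalized matrix of VAR coefficients.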
This paper provides a toolbox for the credibility analysis of frequency risks, with allowance for the seniority of claims and of risk exposure. We use Poisson models with dynamic and second-order stationary random effects that ensure nonnegative credibilities per period. We specify classes of autocovariance functions that are compatible with positive random effects and that entail nonnegative credibilities regardless of the risk exposure. Random effects with nonnegative generalized partial autocorrelations are shown to imply nonnegative credibilities. This holds for ARFIMA(0, d, 0) models. The AR(p) time series that ensure nonnegative credibilities are specified from their precision matrices. The compatibility of these semiparametric models with log-Gaussian random effects is verified. Gaussian sequences with ARFIMA(0, d, 0) specifications, which are then exponentiated entrywise, provide positive random effects that also imply nonnegative credibilities. Dynamic random effects applied to Poisson distributions are retained as products of two uncorrelated and positive components: the first is time-invariant, whereas the autocovariance function of the second vanishes at infinity and ensures nonnegative credibilities. The limit credibility is related to the three levels for the length of the memory in the random effects. The limit credibility is less than one in the short memory case, and a formula is provided.
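For orientation, the classical static baseline that the dynamic random effects above generalize is the Bühlmann credibility factor. Under a Poisson mixture with random effect $\Theta$, the process variance is $\mu = E[\Theta]$ and the variance of hypothetical means is $a = \mathrm{Var}(\Theta)$, giving $Z = n/(n + \mu/a)$. This sketch shows only that textbook case, not the paper's per-period credibilities; the numbers are hypothetical.

```python
def buhlmann_credibility(n, mu, a):
    # Classical (static) Buhlmann credibility factor for a Poisson mixture:
    # Z = n / (n + k) with k = (process variance mu) / (variance of means a).
    return n / (n + mu / a)

def credibility_premium(n, xbar, mu, a):
    # Credibility-weighted premium: blend of individual experience xbar
    # and the collective mean mu.
    z = buhlmann_credibility(n, mu, a)
    return z * xbar + (1 - z) * mu

# Hypothetical book: collective claim frequency 0.1, variance of means 0.05.
print(buhlmann_credibility(5, 0.1, 0.05))  # more periods -> more weight on experience
```

Here $Z$ is automatically in $(0, 1)$ and increases with the number of observed periods $n$; the paper's contribution is ensuring the analogous nonnegativity when the random effects are dynamic.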
Floriculture value exceeds $5.8 billion in the United States. Environmental challenges, market trends, and diseases complicate breeding priorities. To inform breeders’ and geneticists’ research efforts, we set out to gather consumers’ preferences in the form of willingness to pay (WTP) for different rose attributes in a discrete choice experiment. The responses are modeled in WTP space, using polynomials to account for heterogeneity. Consumer preferences indicate that heat and disease tolerance were the most important aspects for subjects in the sample, followed by drought resistance. To the best of our knowledge, this is the first study to identify breeding priorities in rosaceous plants from a consumer perspective.
Following the EU Gender Directive, which obliges insurance companies to charge the same premium to policyholders of different genders, we address the issue of calculating solvency capital requirements (SCRs) for pure endowments and annuities issued to mixed portfolios. The main theoretical result is that, if the unisex fairness principle is adopted for the unisex premium, the SCR at issuing time of the mixed portfolio calculated with unisex survival probabilities is greater than the sum of the SCRs of the gender-based subportfolios. Numerical results show that for pure endowments the gap between the two is negligible, but for lifetime annuities the gap can be as high as 3–4%. We also analyze some conservative pricing procedures that deviate from the unisex fairness principle, and find that they lead to SCRs that are lower than the sum of the gender-based SCRs because the policyholders are overcharged at issuing time.
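The objects involved can be illustrated on a pure endowment. The sketch below uses hypothetical figures, takes one simple reading of a unisex premium (survival probabilities blended by portfolio gender mix), and proxies the longevity stress by a flat cut in the aggregate death probability; it is not the paper's SCR computation or its fairness principle as formally defined.

```python
def pure_endowment_premium(p_survive, v, n):
    # Single premium of a pure endowment paying 1 after n years: v**n * p.
    return v ** n * p_survive

def longevity_scr_proxy(p_survive, v, n, shock=0.2):
    # Crude longevity stress: reduce the death probability over the n years
    # by `shock` and take the resulting increase in the liability.
    p_shocked = 1.0 - (1.0 - shock) * (1.0 - p_survive)
    return (pure_endowment_premium(p_shocked, v, n)
            - pure_endowment_premium(p_survive, v, n))

# Hypothetical inputs: 10-year survival 0.90 (female) / 0.85 (male),
# 60% female portfolio, flat 2% annual interest.
p_f, p_m, w_f, v, n = 0.90, 0.85, 0.60, 1 / 1.02, 10
p_unisex = w_f * p_f + (1 - w_f) * p_m   # one reading of a blended unisex basis
```

Because the pure endowment premium is linear in the survival probability, the blended-basis premium coincides with the mix-weighted average of the gender-based premiums; the interesting gaps in the paper arise at the SCR level, and are largest for lifetime annuities.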