This chapter shows how much the foreign exchange market has changed and developed over the years. It also reveals what goes on behind the scenes when buying foreign exchange at a bureau de change, paying bills to a company in another country or booking a hotel using a credit card on the internet or by telephone. The days of noisy trading floors where dealers shouted at each other have long gone, replaced by computers and people tapping at keyboards or talking quietly to each other. When did the foreign exchange market start to change and why does it matter?
This chapter uses evidence collected by regulators to demonstrate how dealers’ practice of acting as principals rather than agents put their clients at risk. Drawing on extensive excerpts from traders’ communications, it shows how they carried out their manipulations and the environment in which they took place.
This chapter traces the history of financial regulation in the UK. It challenges the widely held belief that the period from the 1980s witnessed a systematic process of deregulation. In fact, from the 1970s there was a period of increasing regulation up until the mid-2000s, when the government began to encourage a ‘light touch’ approach. The combination of all these factors meant that banks were ill-prepared to meet the financial crisis. In its aftermath, as the banks embarked on the slow path to recovery, making profits was essential. The traders seized any opportunity they could, and it may well be the case that banks were simply relieved that some areas of their business were profitable.
This chapter describes the effects of the Financial Stability Board’s review of interest rate benchmarks. The Board’s report recommended a number of measures to make the benchmarks more secure, notably by underpinning existing IBORs with transactions data and by developing alternative, nearly risk-free rates. New benchmarks would be developed with reference to the IOSCO Principles published in July 2013. The chapter explains these principles and how they were put into practice.
This chapter charts the loss of faith in LIBOR that began to set in during the financial crisis, particularly following two articles in the Wall Street Journal. Investigation by the regulators subsequently revealed that a number of early warnings had been overlooked, and that certain banks had been distorting rates since at least 2005. Drawing on reports by the UK’s Financial Services Authority, the chapter demonstrates the day-to-day manipulation practised by traders at Barclays, the Royal Bank of Scotland and UBS.
This chapter begins with a short history of the London bullion market, including the development of the Gold and Silver Fixes. After the LIBOR and foreign exchange scandals broke, suspicions soon emerged that the gold and silver markets were also being rigged. Initial investigations by the Commodity Futures Trading Commission found no evidence of this, but orders would later be issued against a number of figures, notably trader David Liew, and steps would be taken to protect the system from manipulation.
Credibility theory provides a fundamental framework in actuarial science for estimating policyholder premiums by blending individual claims experience with overall portfolio data. Bühlmann and Bühlmann–Straub credibility models are widely used because, in the Bayesian hierarchical setting, they are the best linear Bayes estimators, minimizing the Bayes risk (expected squared error loss) within the class of linear estimators given the experience data for a particular risk class. To improve estimation accuracy, quadratic credibility models incorporate higher-order terms, capturing more information about the underlying risk structure. This study develops a robust quadratic credibility (RQC) framework that integrates second-order polynomial adjustments of robustly transformed ground-up loss data, such as winsorized moments, to improve stability in the presence of extreme claims or heavy-tailed distributions. Extending semi-linear credibility, RQC maintains interpretability while enhancing statistical efficiency. We establish its asymptotic properties, derive closed-form expressions for the RQC premium, and demonstrate its superior performance in reducing mean square error (MSE). We additionally derive semi-linear credibility structural parameters using winsorized data, further strengthening the robustness of credibility estimation. Analytical comparisons and empirical applications highlight RQC’s ability to capture claim heterogeneity, offering a more reliable and equitable approach to premium estimation. This research advances credibility theory by introducing a refined methodology that balances efficiency, robustness, and practical applicability across diverse insurance settings.
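The RQC derivations are beyond the scope of an abstract, but the two ingredients it builds on, winsorizing the ground-up losses and blending class experience with the portfolio mean through a Bühlmann credibility factor, can be shown in a minimal sketch. Everything below (the simulated losses, winsorizing limits, and class structure) is illustrative rather than taken from the paper:

```python
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(42)

# Simulated ground-up losses: 5 risk classes, 30 heavy-tailed claims each.
classes = [rng.pareto(2.5, size=30) * 10_000 for _ in range(5)]

# Winsorize each class at the 5th/95th percentiles to damp extreme claims.
w_classes = [np.asarray(winsorize(x, limits=(0.05, 0.05))) for x in classes]

class_means = np.array([x.mean() for x in w_classes])
n = len(w_classes[0])                      # claims per class
mu = class_means.mean()                    # overall (collective) mean
s2 = np.mean([x.var(ddof=1) for x in w_classes])  # expected process variance
a = class_means.var(ddof=1) - s2 / n       # variance of hypothetical means
a = max(a, 1e-12)                          # guard against a negative estimate

Z = n / (n + s2 / a)                       # Buhlmann credibility factor
premiums = Z * class_means + (1 - Z) * mu  # linear credibility premiums
for i, p in enumerate(premiums):
    print(f"class {i}: experience {class_means[i]:,.0f} -> premium {p:,.0f}")
```

The RQC premium then adds a second-order term in the robustly transformed experience on top of this linear blend; the sketch above is the familiar linear baseline it extends.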
When overdispersion and correlation co-occur in longitudinal count data, as is often the case, an analysis method that can handle both phenomena simultaneously is needed. The correlated Poisson distribution (CPD) proposed by Drezner and Farnum (Communications in Statistics-Theory and Methods, 22(11), 3051–3063, 1994) is a generalization of the classical Poisson distribution with the incorporation of an additional parameter that allows dependence between successive observations of the phenomenon under study. This parameter both measures the correlation and reflects the degree of dispersion. The classical Poisson distribution is obtained as a special case when the correlation is zero. We present an in-depth review of this CPD and discuss some methods to estimate the distribution parameters. The inclusion of regression components in this distribution is enabled by allowing one of the parameters to include available information concerning, in this case, automobile insurance policyholders. The proposed distribution can be viewed as an alternative to the Poisson, negative binomial, and Poisson-inverse Gaussian approaches. We then describe applications of the distribution, suggest it is appropriate for modeling the number of claims in an automobile insurance portfolio, and establish some new distribution properties.
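The abstract does not restate the Drezner and Farnum pmf, so rather than guess at it, the sketch below illustrates the diagnostic that motivates such models: claim counts whose sample variance exceeds their mean are overdispersed, violating the Poisson identity Var(N) = E(N). The data and the negative binomial stand-in are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative yearly claim counts per policy; the negative binomial stands
# in for any overdispersed count process (the abstract's CPD is one such model).
counts = rng.negative_binomial(n=2, p=0.5, size=10_000)

mean, var = counts.mean(), counts.var(ddof=1)
dispersion = var / mean  # equals 1 for a Poisson; > 1 signals overdispersion
print(f"mean={mean:.3f}  variance={var:.3f}  variance/mean={dispersion:.3f}")
# A ratio well above 1 means the Poisson assumption Var = E is violated,
# motivating alternatives such as the CPD, negative binomial, or
# Poisson-inverse Gaussian mentioned in the abstract.
```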
The practice of actuarial science has always been rooted in computation. From the early days of hand-constructed tables and commutation functions to today’s large-scale stochastic simulations and machine learning models, actuaries have continuously adapted their analytical tools to the technology of their time. The rapid growth of high-performance computing, open-source software, and data-driven methodologies now offers new possibilities for actuarial modeling – transforming not only how we calculate, but also how we think about risk, uncertainty, and decision-making. This editorial introduces a thematic collection on Actuarial Software, which showcases recent advances at the intersection of actuarial modeling and computational science.
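As a concrete reminder of what those early computational tools did, here is a minimal sketch of commutation functions, using a toy survival curve in place of a real life table (all numbers are illustrative):

```python
import numpy as np

i = 0.03                               # flat valuation interest rate
v = 1 / (1 + i)
ages = np.arange(60, 121)
l = 100_000 * 0.94 ** (ages - 60)      # toy survival curve in place of l_x

D = v ** ages * l                      # D_x = v^x * l_x
N = D[::-1].cumsum()[::-1]             # N_x = sum of D_y for y >= x

# Expected present value of a whole-life annuity-due of 1 from age x
# reduces to a single table lookup and division: a-due_x = N_x / D_x.
x = 65
annuity_due = N[x - 60] / D[x - 60]
print(f"annuity-due at age {x}: {annuity_due:.4f}")
```

With D_x and N_x tabulated once, a valuation reduces to a lookup and a division, which is exactly the kind of precomputation the editorial describes actuaries now replacing with simulation and machine learning.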
Fine-grained mortality forecasting has gained momentum in actuarial research due to its ability to capture localized, short-term fluctuations in death rates. This paper introduces MortFCNet, a deep-learning method that predicts weekly death rates using region-specific weather inputs. Unlike traditional Serfling-based methods and gradient-boosting models, which rely on predefined Fourier terms and manual feature engineering, MortFCNet learns patterns directly from raw time-series data. Extensive experiments across over 200 NUTS-3 regions in France, Italy, and Switzerland demonstrate that MortFCNet consistently outperforms both a standard Serfling-type baseline and XGBoost in terms of predictive accuracy. Our ablation studies further confirm its ability to uncover complex relationships in the data without feature engineering. Moreover, this work opens a new perspective on the use of deep learning for advancing fine-grained mortality forecasting.
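The abstract does not specify MortFCNet's architecture, so the sketch below is only a generic fully-connected network for the stated task of predicting weekly death rates from region-specific weather inputs. The lookback window, feature set, layer sizes, and training loop are all assumptions made for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical setup: predict next week's log death rate from the last
# 8 weeks of (death rate, temperature, humidity) for one region. All shapes,
# names, and the architecture itself are illustrative, not MortFCNet's.
LOOKBACK, N_FEATURES = 8, 3

model = nn.Sequential(
    nn.Flatten(),                         # (batch, 8, 3) -> (batch, 24)
    nn.Linear(LOOKBACK * N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                     # next week's log death rate
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a random batch (placeholder for real regional data).
x = torch.randn(32, LOOKBACK, N_FEATURES)
y = torch.randn(32, 1)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"training loss: {loss.item():.4f}")
```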
Inequality is an inherent quality of society. This paper provides actuarial insights into the recognition, measurement, and consequences of inequality. Key underlying concepts are discussed, with an emphasis on the distinction between inequality of opportunity and inequality of outcome. To better design and maintain approaches and programmes that mitigate its adverse effects, it is important to understand its contributing causes. The paper outlines strategies for reflecting on and addressing inequality in actuarial practice. Actuaries are encouraged to work with policymakers, employers, providers, regulators, and individuals in the design and management of sustainable programmes to address some of the critical issues associated with inequality. These programmes can encourage more equal opportunities and protect against the adverse financial effects of outcomes.
Fitting loss distributions in insurance is sometimes a dilemma: either you get a good fit for the small/medium losses or for the very large losses. To be able to get both at the same time, this paper studies generalisations and extensions of the Pareto model that initially look like, for example, the Lognormal distribution but have a Pareto or GPD tail. We design a classification of such spliced distributions, which embraces and generalises various existing approaches. Special attention is paid to the geometry of distribution functions and to intuitive interpretations of the parameters, which can ease parameter inference from scarce data. The developed framework also gives new insights into the old Riebesell (power curve) exposure rating method.
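The paper's classification of spliced distributions is broader than a single example, but the standard two-component splice it generalises is easy to state: below a threshold theta, a renormalised body distribution with weight p; above it, a Pareto tail with weight 1 - p. A minimal sketch with assumed parameter values:

```python
import numpy as np
from scipy import stats

theta, p = 50_000.0, 0.9                   # splice point and body weight
body = stats.lognorm(s=1.2, scale=10_000)  # lognormal body (illustrative)
alpha = 1.8                                # Pareto tail index (illustrative)

def spliced_pdf(x):
    """Density of a lognormal body spliced with a Pareto tail at theta."""
    x = np.asarray(x, dtype=float)
    below = p * body.pdf(x) / body.cdf(theta)                # renormalised body
    above = (1 - p) * alpha * theta**alpha / x**(alpha + 1)  # Pareto tail
    return np.where(x <= theta, below, above)

xs = np.array([1_000, 25_000, 60_000, 250_000])
print(spliced_pdf(xs))
```

Splicing lets the body parameters be fitted to the abundant small and medium losses and the tail index to the few large ones, the two sides of the dilemma the paper opens with.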
This paper presents a comprehensive analysis of the frequency and severity of accidents involving electric vehicles (EVs) in comparison to internal combustion engine vehicles (ICEVs). It draws on extensive data from Norway from 2020 to 2023, a period characterised by significant EV adoption. We examine over two million registered EVs that collectively account for 28 billion kilometres of travel. In total we have analysed 139 billion kilometres of travel and close to 140,000 accidents across all fuel types. We supplement this with data from the Highway Loss Data Institute in the US and the Association of British Insurers in the UK, as well as information from the Guy Carpenter large loss motor database.
The literature to date lacks a thorough comparison of the accident frequency and severity of EVs with those of ICEVs, a gap this paper aims to address. This research will assist actuaries and analysts across various domains, including pricing, reserving and reinsurance considerations.
Our findings reveal a notable reduction in the frequency of accidents across all fuel types over time. Specifically, EVs demonstrate a lower accident frequency compared to ICEVs, a trend that may be attributable more to advancements in technology than to the inherent characteristics of the fuel type, even after adjusting for COVID. Furthermore, our analysis indicates that EVs experience fewer single-unit accidents than non-EVs, suggesting a decrease in driver error and superior performance on regular road types.
We find a 17% reduction in EV accident frequency and a change in the distribution of average severity, with higher damage costs and lower injury costs, leading to an overall reduction of 11%.
However, it is important to note that when accidents do occur, the number of units involved, used here as a proxy for severity, is marginally higher for EVs than for ICEVs. The average claim cost profile for EVs changes significantly, with property damage claims being more expensive and bodily injury claims being less expensive.
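Reading the headline figures multiplicatively, which is an assumption since the decomposition is not stated here (overall expected cost = accident frequency x average severity), the quoted 17% frequency reduction and 11% overall reduction together imply an average severity per EV accident roughly 7% higher, consistent with the shifted claim cost profile:

```python
# Assumption: expected cost per km = accident frequency * average severity.
freq_ratio = 1 - 0.17          # EV frequency relative to ICEV (17% lower)
overall_ratio = 1 - 0.11       # EV overall expected cost (11% lower)

severity_ratio = overall_ratio / freq_ratio
print(f"implied EV severity ratio: {severity_ratio:.3f}")  # ~1.072, ~7% higher
```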
Overall, our research concludes that EVs present a lower risk profile compared to their ICEV counterparts, highlighting the evolving landscape of vehicle safety in the context of increasing EV utilisation.