Learn by doing with this user-friendly introduction to time series data analysis in R. This book explores the intricacies of managing and cleaning time series data of different sizes, scales and granularity, data preparation for analysis and visualization, and different approaches to classical and machine learning time series modeling and forecasting. A range of pedagogical features support students, including end-of-chapter exercises, problems, quizzes and case studies. The case studies are designed to stretch the learner, introducing larger data sets, enhanced data management skills, and R packages and functions appropriate for real-world data analysis. In addition to commented R programs and data sets, the book's companion website offers extra case studies, lecture slides, videos and exercise solutions. Accessible to those with a basic background in statistics and probability, this is an ideal hands-on text for undergraduate and graduate students, as well as researchers in data-rich disciplines.
An engaging, comprehensive, richly illustrated textbook about the atmospheric general circulation, written by leading researchers in the field. The book elucidates the pervasive role of atmospheric dynamics in the Earth System, interprets the structure and evolution of atmospheric motions across a range of space and time scales in terms of fundamental theoretical principles, and includes relevant historical background and tutorials on research methodology. The book includes over 300 exercises and is accompanied by extensive online resources, including solutions manuals, an animations library, and an introduction to online visualization and analysis tools. It is suitable for advanced undergraduate and graduate courses in atmospheric sciences and geosciences curricula, and as a reference for researchers.
Economic regulation affects us all, shaping how we access essential services such as water, energy and transport, as well as how we communicate with one another in the digital world. Modern Economic Regulation describes the core insights of economic theory on which regulatory policies are based and connects this with evidence of how regulation is applied. It focuses on fundamental questions such as: why are certain industries regulated? What principles can inform regulation? How is regulation implemented? Which regulatory policies have been more, or less, effective in practice? All chapters in this second edition are fully updated to reflect the latest research and evidence, while five new chapters cover behavioural economics and the regulation of rail, aviation, payment systems and digital platforms. Each chapter contains discussion questions and topical case studies, and online materials include over 60 applied exercises that explore real-life regulatory problems from around the world.
Chapter 2: Linearly independent lists of vectors that span a vector space are of special importance. They provide a bridge between the abstract world of vector spaces and the concrete world of matrices. They permit us to define the dimension of a vector space and motivate the concept of matrix similarity.
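The bridge between vector spaces and matrices described above can be made concrete numerically: a list of vectors is linearly independent exactly when the matrix whose columns are those vectors has rank equal to the number of vectors. A minimal sketch (the vectors here are made up for the example):

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two,
# so the list is linearly dependent.
vectors = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 1.0, 3.0]]).T  # columns are the vectors

rank = np.linalg.matrix_rank(vectors)
independent = rank == vectors.shape[1]
print(independent)  # prints False: the rank is 2, not 3
```

The rank also gives the dimension of the span of the list, which is the sense in which these ideas let us define the dimension of a vector space.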
Given the assumption that a loss random variable has a certain parametric distribution, the empirical analysis of the properties of the loss requires the parameters to be estimated. In this chapter, we review the theory of parametric estimation, including the properties of an estimator and the concepts of point estimation, interval estimation, unbiasedness, consistency and efficiency. Apart from the parametric approach, we may also estimate the distribution functions and the probability (density) functions of the loss random variables directly without assuming a certain parametric form.
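As a hedged sketch of the point and interval estimation ideas mentioned here, assume the losses are exponential with mean theta (an illustrative choice; the data below are simulated). The maximum likelihood point estimate of theta is the sample mean, and a large-sample 95% interval follows from its estimated standard error:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated losses, assumed exponential with true mean 1000.
losses = rng.exponential(scale=1000.0, size=5000)

theta_hat = losses.mean()              # point estimate (the MLE)
se = theta_hat / np.sqrt(losses.size)  # estimated standard error of the MLE
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)  # interval estimate
```

The MLE here is unbiased and consistent for the exponential mean, illustrating the estimator properties the chapter reviews.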
Exact solutions for infinite Ising systems are rare, specific in terms of the interactions allowed, and limited to one and two dimensions. To study a wider range of models we must resort to various approximation techniques. One of the simplest and most comprehensive of these is the mean-field approximation, the subject of this chapter. Some versions of this approximation rely on a self-consistent requirement, and in this respect the mean-field method for the Ising model is similar to a number of other self-consistent approximation methods in physics, including the Hartree–Fock approximation for atomic and molecular orbitals, the BCS theory of superconductivity, and the relaxation method for determining electric potentials. We will also introduce a somewhat different mean-field approach, the Landau–Ginzburg approximation, which is based on a series expansion of the free energy. One of the drawbacks of all of the mean-field theories, however, is that they predict the same mean-field critical exponents, which, unfortunately, are at odds with the results of exact solutions and experiments.
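The self-consistency requirement mentioned above can be illustrated with a minimal sketch: in the simplest mean-field treatment the magnetization satisfies m = tanh(beta * z * J * m), where z is the coordination number, and this equation can be solved by fixed-point iteration (the parameter values below are illustrative):

```python
import numpy as np

def mean_field_magnetization(beta, z=4, J=1.0, m0=0.5,
                             tol=1e-10, max_iter=10_000):
    """Solve m = tanh(beta * z * J * m) by fixed-point iteration."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh(beta * z * J * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# The mean-field critical point is beta_c = 1 / (z * J) = 0.25 here:
# below it the only solution is m = 0; above it a nonzero
# spontaneous magnetization appears.
print(mean_field_magnetization(beta=0.2))  # essentially 0
print(mean_field_magnetization(beta=0.5))  # nonzero, about 0.96
```

The same iterate-until-self-consistent structure underlies the other self-consistent methods named in the chapter, such as Hartree–Fock and the relaxation method for potentials.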
Macro-causal analysis pivots between exploring patterns of change and exploring causal patterns. It employs a bounded conception of history, keeping causal factors constant, but expands the length of causal chains to explore how their temporal order affects causal outcomes. It analyzes this causal order in terms of elements of physical time: timing, sequencing, tempo, and duration. In paying attention to these elements, it in effect unfreezes physical time, which defines the existing linear notions of causality. Macro-causal analysis relaxes these linearity assumptions by expanding the analysis from what Pierson called short–short to short–long, long–short, and long–long explanations. It thus recognizes that theories rest on temporal assumptions and that understanding those assumptions invites exploring backgrounded causal factors that can help update theories. This updating process is abduction, which ultimately makes hypotheses more testworthy. Besides elongating causal chains, this type of analysis also elongates outcomes by paying attention to a range of near-miss outcomes that are frequently overlooked but often provide important new inductive insights.
Ratemaking refers to the determination of the premium rates to cover the potential loss payments incurred under an insurance policy. In addition to the losses, the premium should also cover all the expenses as well as the profit margin. As past losses are used to project future losses, care must be taken to adjust for potential increases in the loss costs. There are two methods to determine the premium rates: the loss cost method and the loss ratio method.
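The two methods can be sketched with illustrative numbers (all loadings below are assumptions made up for the example). The loss cost method loads expected losses per exposure for expenses and profit; the loss ratio method adjusts the current rate by the ratio of the experienced loss ratio to the target loss ratio:

```python
# Loss cost (pure premium) method.
pure_premium = 400.0     # expected loss per exposure unit
fixed_expense = 25.0     # fixed expense per exposure unit
variable_ratio = 0.25    # variable expense + profit, as a share of premium
gross_rate = (pure_premium + fixed_expense) / (1 - variable_ratio)
# gross_rate = 425 / 0.75 ~ 566.67

# Loss ratio method.
current_rate = 550.0
experience_loss_ratio = 0.80   # observed losses / earned premium
target_loss_ratio = 0.75       # permissible loss ratio
indicated_rate = current_rate * experience_loss_ratio / target_loss_ratio
# indicated_rate = 550 * 0.80 / 0.75 ~ 586.67
```

Note the structural difference: the loss cost method produces a rate from first principles, while the loss ratio method produces an adjustment to an existing rate.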
Chapter 1: In this chapter, we provide formal definitions of real and complex vector spaces, and many examples. Among the important concepts introduced are linear combinations, span, linear independence, and linear dependence.
Having discussed models for claim frequency and claim severity separately, we now turn our attention to modeling the aggregate loss of a block of insurance policies. Much of the time we shall use the terms aggregate loss and aggregate claim interchangeably, although we recognize the difference between them as discussed in the last chapter. There are two major approaches in modeling aggregate loss: the individual risk model and the collective risk model.
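The collective risk model writes the aggregate loss as S = X_1 + ... + X_N, with a random claim count N and i.i.d. claim severities X_i. A minimal Monte Carlo sketch, with illustrative Poisson frequency and exponential severity assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: claim count N ~ Poisson(lam),
# severities X_i exponential with mean sev_mean.
lam, sev_mean = 2.0, 500.0

def simulate_aggregate_loss(n_sims=100_000):
    counts = rng.poisson(lam, size=n_sims)
    return np.array([rng.exponential(sev_mean, size=n).sum()
                     for n in counts])

s = simulate_aggregate_loss()
# Under these assumptions E[S] = E[N] * E[X] = lam * sev_mean = 1000,
# which the simulated mean should approximate.
```

The individual risk model instead sums the (possibly zero) loss of each policy in the block; the simulation structure is analogous but indexes over policies rather than claims.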
Chapter 6: In this chapter, we explore the role of orthonormal (orthogonal and normalized) vectors in an inner-product space. Matrix representations of linear transformations with respect to orthonormal bases are of particular importance. They are associated with the notion of an adjoint transformation. We give a brief introduction to Fourier series that highlights the orthogonality properties of sine and cosine functions. In the final section of the chapter, we discuss orthogonal polynomials and the remarkable numerical integration rules associated with them.
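The numerical integration rules mentioned at the end of the chapter can be sketched briefly: the nodes of an n-point Gauss–Legendre rule are the roots of the degree-n Legendre polynomial, and the rule integrates polynomials of degree up to 2n − 1 exactly on [−1, 1]:

```python
import numpy as np

# 3-point Gauss-Legendre rule: nodes are the roots of the degree-3
# Legendre polynomial; exact for polynomials of degree <= 5.
nodes, weights = np.polynomial.legendre.leggauss(3)

f = lambda x: x**4            # degree 4 <= 5, so the rule is exact
approx = np.sum(weights * f(nodes))
exact = 2.0 / 5.0             # integral of x^4 over [-1, 1]
```

The remarkable degree of exactness comes precisely from the orthogonality of the Legendre polynomials, tying the rule back to the inner-product structure discussed in the chapter.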
After a model has been estimated, we have to evaluate it to ascertain that the assumptions applied are acceptable and supported by the data. This should be done prior to using the model for prediction and pricing. Model evaluation can be done using graphical methods, as well as formal misspecification tests and diagnostic checks.
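One common graphical check is a Q–Q comparison of the empirical loss quantiles against the quantiles of the fitted model. A hedged sketch, with simulated data and an assumed exponential model (both illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated losses; the fitted model is exponential with mean
# estimated by maximum likelihood (the sample mean).
losses = np.sort(rng.exponential(scale=100.0, size=1000))
theta_hat = losses.mean()

probs = (np.arange(1, losses.size + 1) - 0.5) / losses.size
fitted_q = -theta_hat * np.log1p(-probs)   # exponential quantile function

# Near-perfect correlation means the points lie close to the
# 45-degree line on a Q-Q plot, supporting the assumed model.
corr = np.corrcoef(losses, fitted_q)[0, 1]
```

Formal misspecification tests (for example, goodness-of-fit tests comparing the fitted and empirical distribution functions) complement such graphical diagnostics.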
We live in a constantly changing world, which requires reformulating our research questions. CHA is particularly well suited for describing and redescribing this changing and ultimately historical world. It emerged independently in many different disciplines, and thus is spoken in many different vernaculars. This book extracts from those vernaculars a common grammar of time, to place CHA on more systematic methodological foundations. The introduction previews key elements of CHA: its reliance on historical thinking to explore social reality; its deployment of a rich temporal vocabulary; and three distinct strands of CHA, each making its own distinct contributions.