This chapter considers dynamic programming in continuous time when the state space is finite. The chapter begins with a treatment of linear dynamics in vector space and then identifies lifetime values with integrals over discounted expected rewards. The optimality theory uses a discrete version of the Hamilton-Jacobi-Bellman equation. The material serves as a jumping-off point for learning continuous-time optimization.
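For readers new to this material, the finite-state Hamilton-Jacobi-Bellman equation referred to above can be sketched as follows; the notation (discount rate ρ, flow reward r, jump intensities q) is generic and not necessarily the chapter's own.

```latex
% Finite-state HJB equation (generic notation): rho is the discount rate,
% r(x,a) the flow reward, and q(x'|x,a) the jump intensities of the controlled
% Markov chain, with q(x|x,a) = -sum_{x' != x} q(x'|x,a).
\[
  \rho\, v(x) \;=\; \max_{a \in A(x)} \Big\{ r(x,a)
    + \sum_{x' \in \mathsf{X}} q(x' \mid x, a)\, v(x') \Big\},
  \qquad x \in \mathsf{X}.
\]
% The associated lifetime value is the discounted expected reward integral
% v(x) = E_x \int_0^\infty e^{-\rho t} r(X_t, a_t) \, dt mentioned in the abstract.
```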
This chapter introduces order theory and gives a more detailed treatment of numerical methods. It also discusses the connection between matrices and linear operators.
Dynamic programming (DP) is a sub-field of optimization concerned with sequential decision making over time. The essential ideas of DP have been adopted in many applications, from robotics and AI to the sequencing of DNA. It is used around the world to control aircraft, route shipping, test products, recommend information on media platforms and solve major research problems. Dynamic Programming: Finite States treats the theory of dynamic programming and its applications in economics, finance, and operations research. It contains classical results on dynamic programming as well as extensions created by researchers and practitioners as they wrestle with formulating and solving dynamic models that can explain patterns observed in data. Adopting an abstract framework that provides great generality, this book facilitates rapid progress to the research frontier by combining rigorous theory with numerous applications, many solved exercises, and detailed open-source computer code.
This book studies the methodological revolution that has resulted in economists' mathematical market models being exported across the social sciences. The ensuing process of economics imperialism has struck fear into subject specialists worried that their disciplinary knowledge will subsequently count for less. Yet even though mathematical market models facilitate important abstract thought experiments, they are no substitute for carefully contextualised empirical investigations of real social phenomena. The two exist on completely different ontological planes, producing very different types of explanation.
In this deeply researched and wide-ranging intellectual history, Matthew Watson surveys the evolution of modern economics and its modelling methodology. With its origins in Jevons and Robbins and its culmination in Samuelson, Arrow and Debreu, he charts the escape from reality that has allowed economists' hypothetical mathematical models to speak to increasingly self-referential mathematical truths. These are shown to perform badly as social truths, consequently imposing strict epistemic limits on economics imperialism.
The book is a formidable analysis of the epistemic limitations of modern-day economics and marks a significant counter to its methodology's encroachment across the wider social sciences.
This Element presents the κ-generalized distribution, a statistical model tailored for the analysis of income distribution. Developed over years of collaborative, multidisciplinary research, it clarifies the statistical properties of the model, assesses its empirical validity and compares its effectiveness with other parametric models. It also presents formulas for calculating inequality indices within the κ-generalized framework, including the widely used Gini coefficient and the lesser-known Zanardi index of Lorenz curve asymmetry. Through empirical illustrations, the Element criticizes the conventional application of the Gini index, pointing out its inadequacy in capturing the full spectrum of inequality characteristics. Instead, it advocates the adoption of the Zanardi index, accentuating its ability to capture the inherent heterogeneity and asymmetry in income distributions.
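As a concrete point of reference for the inequality indices mentioned above, a minimal Python sketch of the sample Gini coefficient follows; the function name and the lognormal test sample are illustrative assumptions rather than code from the Element, and the Zanardi asymmetry index is not reproduced here.

```python
import numpy as np

def gini(income):
    """Empirical Gini coefficient from a sample of incomes (illustrative only)."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    # Standard formula based on the ordered sample:
    # G = 2 * sum_i (i * x_(i)) / (n * sum_i x_i) - (n + 1) / n
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

# Example: a heavy-tailed sample loosely mimicking an income distribution
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)
print(round(gini(sample), 3))
```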
This chapter reviews alternative methods for estimating the integrated covariance matrix (ICM) using high-frequency data, together with their properties. The high-frequency data are assumed to come from a continuous-time model. The alternative estimators are justified by their asymptotic properties under the infill asymptotic scheme, which requires the time interval Δ between any two consecutive observations to go to zero. In reviewing the methods, we separate those that assume the dimension of the ICM is fixed from those that allow it to grow to infinity with the sample size. Comparisons of the performance of alternative ICM estimators in portfolio choice are also discussed.
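To fix ideas, a naive realized covariance estimator of the ICM can be sketched in a few lines; this is an illustrative baseline under ideal conditions (synchronous, noise-free observations), not one of the refined estimators reviewed in the chapter.

```python
import numpy as np

def realized_covariance(prices):
    """
    Naive realized covariance estimator of the integrated covariance matrix
    from a (T+1) x d array of intraday prices. Illustrative sketch only: it
    ignores microstructure noise and asynchronous trading, which the more
    refined estimators reviewed in the chapter are designed to handle.
    """
    returns = np.diff(np.log(prices), axis=0)   # high-frequency log returns
    return returns.T @ returns                  # sum of outer products

# Example with simulated prices for d = 3 assets over 390 intraday intervals
rng = np.random.default_rng(1)
log_px = np.cumsum(rng.normal(scale=1e-3, size=(391, 3)), axis=0)
icm_hat = realized_covariance(np.exp(log_px))
print(icm_hat.shape)  # (3, 3)
```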
In the presence of bubbles, asset prices consist of a fundamental component and a bubble component, with the bubble component following explosive dynamics. The general idea for bubble identification is to apply explosive root tests to a proxy of the unobservable bubble. This chapter provides a theoretical framework that incorporates several definitions of bubbles (and fundamentals) and offers guidance for selecting proxies. For explosive root tests, we introduce the recursive evolving test of Phillips, Shi, and Yu (2015a,b) along with its asymptotic properties. This procedure can serve as a real-time monitoring device and has been shown to outperform several other tests. Like all other recursive testing procedures, the PSY algorithm faces the issue of multiplicity in testing. We propose a multiple-testing algorithm to determine appropriate test critical values and show its satisfactory performance in finite samples by simulations. To illustrate, we conduct a pseudo real-time bubble monitoring exercise in the S&P 500 stock market from January 1990 to June 2020. The empirical results reveal the importance of using a good proxy for bubbles and of addressing the multiplicity issue.
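To convey the flavour of recursive explosive root testing, the sketch below computes a simplified expanding-window sup ADF statistic with statsmodels; it is not the full PSY recursive evolving procedure (which also varies the window start point and relies on dedicated right-tailed critical values), and the function name and simulation settings are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sup_adf(y, min_window=36, lags=1):
    """
    Simplified right-tailed sup ADF statistic over expanding windows.
    Only a sketch of the idea behind recursive bubble tests: the PSY
    procedure additionally varies the window start point and uses
    right-tailed critical values, neither of which is implemented here.
    """
    y = np.asarray(y, dtype=float)
    stats = []
    for end in range(min_window, len(y) + 1):
        adf_stat = adfuller(y[:end], maxlag=lags, regression="c", autolag=None)[0]
        stats.append(adf_stat)
    return max(stats)

# Example: a series that switches from a random walk to mildly explosive growth
rng = np.random.default_rng(2)
rw = np.cumsum(rng.normal(size=150))
bubble = [rw[-1]]
for _ in range(50):
    bubble.append(1.03 * bubble[-1] + rng.normal())
print(round(sup_adf(np.concatenate([rw, bubble[1:]])), 2))
```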
This chapter provides an overview of posterior-based specification testing methods and model selection criteria that have been developed in recent years. For the specification testing methods, the first is a posterior-based version of the IOSA test; the second is motivated by the power enhancement technique. For the model selection criteria, we first review the deviance information criterion (DIC). We discuss its asymptotic justification and shed light on the circumstances in which DIC fails to work. One practically relevant circumstance is when latent variables are treated as parameters. Another important circumstance is when the candidate model is misspecified. We then review DIC_L for latent variable models and DIC_M for misspecified models.
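For reference, the standard textbook form of DIC, written in generic notation that may differ from the chapter's, is:

```latex
% Deviance information criterion: D(theta) = -2 log p(y | theta) is the
% deviance and theta_bar is the posterior mean of theta.
\[
  \mathrm{DIC} \;=\; \overline{D} + p_D,
  \qquad
  \overline{D} = \mathbb{E}_{\theta \mid y}\!\left[ D(\theta) \right],
  \qquad
  p_D = \overline{D} - D(\bar{\theta}).
\]
```

The difficulties noted above arise because both terms depend on which quantities are integrated out of the likelihood (an issue when latent variables are treated as parameters) and on whether that likelihood is correctly specified.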
This chapter reviews alternative methods proposed in the literature for estimating discrete-time stochastic volatility models and illustrates the details of their application. The methods reviewed are classified as either frequentist or Bayesian. The frequentist class includes the generalized method of moments, quasi-maximum likelihood, the empirical characteristic function method, the efficient method of moments, and simulated maximum likelihood based on a Laplace importance sampler. The Bayesian methods include single-move Markov chain Monte Carlo, multi-move Markov chain Monte Carlo, and sequential Monte Carlo.
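As background for these methods, a common benchmark specification is the log-normal stochastic volatility model, written here in generic notation that may differ from the chapter's parameterization:

```latex
% Canonical discrete-time stochastic volatility model (generic notation):
\[
  y_t = \exp(h_t / 2)\,\varepsilon_t, \qquad
  h_{t+1} = \mu + \phi\,(h_t - \mu) + \sigma_\eta\,\eta_t, \qquad
  \varepsilon_t,\ \eta_t \sim \text{i.i.d. } N(0, 1).
\]
```

Because the log-volatility h_t is latent, the likelihood is a high-dimensional integral with no closed form, which is what motivates the simulation-based and Bayesian approaches listed above.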
Limit theory is developed for least squares regression estimation of a model involving time trend polynomials and a moving average error process with a unit root. Models with these features can arise from data manipulation such as overdifferencing and model features such as the presence of multicointegration. The impact of such features on the asymptotic equivalence of least squares and generalized least squares is considered. Problems of rank deficiency that are induced asymptotically by the presence of time polynomials in the regression are also studied, focusing on the impact that singularities have on hypothesis testing using Wald statistics and matrix normalization. The chapter is largely pedagogical but contains new results, notational innovations, and procedures for dealing with rank deficiency that are useful in cases of wider applicability.
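A minimal illustration of how overdifferencing induces a moving average unit root (illustrative notation, not the chapter's model):

```latex
% If y_t = alpha + beta*t + e_t with e_t i.i.d., then first differencing gives
\[
  \Delta y_t \;=\; \beta + (1 - L)\,\varepsilon_t ,
\]
% so the error is an MA(1) whose root lies exactly on the unit circle; the
% chapter studies regressions with this error structure together with time
% trend polynomials.
```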
Continuous-time models have found broad applications in many core areas of economics and finance. This chapter first briefly introduces applications of continuous-time models to modeling the dynamics of short-term interest rates. While many estimation methods have been proposed over the past 40 years for fitting continuous-time models to discrete samples, almost all suffer from finite-sample bias. The bias problem is particularly severe for the mean-reversion parameter, which measures the persistence of the interest-rate process. Moreover, such bias propagates and leads to considerable bias in the prices of interest-rate contingent claims, such as bonds and bond options. The focus of this chapter is a detailed review of the bias issue. Two bias-correction methods are discussed, the jackknife method and the indirect inference method, both of which can effectively reduce the estimation bias of the mean-reversion parameter and the resulting bias in pricing contingent claims. Monte Carlo studies are provided to illustrate the characteristics of the bias and investigate the performance of the two bias-correction methods.
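As a hedged illustration of the jackknife idea in this setting, the sketch below combines a full-sample estimate of the mean-reversion parameter of a discretized Ornstein-Uhlenbeck process with estimates from non-overlapping subsamples; the estimator, the weights, and the simulation settings are generic illustrations, not the chapter's code.

```python
import numpy as np

def ou_kappa_hat(x, delta):
    """Illustrative estimator of the mean-reversion parameter kappa of an
    Ornstein-Uhlenbeck process observed at spacing delta: fit the AR(1)
    coefficient by least squares and map it back via kappa = -log(phi)/delta."""
    x0, x1 = x[:-1] - x[:-1].mean(), x[1:] - x[1:].mean()
    phi = np.dot(x0, x1) / np.dot(x0, x0)
    return -np.log(phi) / delta

def jackknife_kappa(x, delta, m=2):
    """Jackknife bias correction using m non-overlapping subsamples.
    The weights follow a standard subsample-jackknife form; this sketch
    ignores many practical refinements."""
    full = ou_kappa_hat(x, delta)
    subs = [ou_kappa_hat(chunk, delta) for chunk in np.array_split(x, m)]
    return (m / (m - 1)) * full - sum(subs) / (m * (m - 1))

# Example: simulate a discretized OU path and compare the two estimates
rng = np.random.default_rng(3)
delta, kappa, n = 1 / 12, 0.5, 600
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] * np.exp(-kappa * delta) + np.sqrt(
        (1 - np.exp(-2 * kappa * delta)) / (2 * kappa)) * rng.normal()
print(round(ou_kappa_hat(x, delta), 3), round(jackknife_kappa(x, delta), 3))
```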
Fractional Brownian motion is a continuous-time, zero-mean Gaussian process with stationary increments. It has gained much attention in empirical finance and asset pricing. For example, it has been used to model the time series of volatility and interest rates. This chapter first introduces the basic properties of fractional Brownian motion and then reviews the statistical models driven by fractional Brownian motion that have been used in financial econometrics, such as the fractional Ornstein–Uhlenbeck model and fractional stochastic volatility models. We also review the parameter estimation methods proposed in the literature. These methods are based on either continuous-time or discrete-time observations.
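For reference, the standard definitions underlying this chapter, in generic notation that may differ from the chapter's, are the fractional Brownian motion covariance and the fractional Ornstein-Uhlenbeck specification:

```latex
% Fractional Brownian motion B_H with Hurst parameter H in (0,1) has covariance
\[
  \mathbb{E}\!\left[ B_H(t)\, B_H(s) \right]
  = \tfrac{1}{2}\left( |t|^{2H} + |s|^{2H} - |t - s|^{2H} \right),
\]
% and the fractional Ornstein-Uhlenbeck model is commonly written as
\[
  \mathrm{d}X_t = -\lambda\,(X_t - \mu)\,\mathrm{d}t + \sigma\,\mathrm{d}B_H(t).
\]
```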
This chapter discusses nonstationary continuous-time models, including those with unit-root and explosive regressors. It covers estimation methods, inferential theory, and empirical examples demonstrating the use of these models. It starts with a univariate framework and extends to multivariate cases for generality.
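A simple univariate member of this class of models (illustrative notation, not necessarily the chapter's specification) is:

```latex
\[
  \mathrm{d}X_t \;=\; \kappa\, X_t\,\mathrm{d}t + \sigma\,\mathrm{d}B_t ,
\]
% where kappa = 0 corresponds to the continuous-time analogue of a unit root,
% kappa > 0 to an explosive process, and kappa < 0 to a stationary,
% mean-reverting process.
```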