This book is concerned with forecasting. Forecasts are required for two basic reasons: the future is uncertain and the full impact of many decisions taken now is not felt until later. Consequently, accurate predictions of the future improve the efficiency of the decision-making process. For example, at the simplest level, in most countries weather forecasts are produced and publicised by the media. These are of interest to the general public, farmers and travellers. If the weather were always the same from day to day, forecasts would not be produced. It is only because the weather changes that forecasting becomes relevant.
Most decisions are made with a view to influencing where one will be in the future: a firm builds a new factory to meet expected future demand for its product; workers decide to save part of their incomes in order to pay for holidays later in the year or to make provision for their retirement; a stock market investor buys some shares now in the hope of receiving a worthwhile return, in dividends and/or capital gains, in the future; a banker buys foreign currency on the forward market to reduce risks of losses from movements in exchange rates. All these activities require some idea or forecast of the future behaviour of key environmental variables so that an assessment can be made of what will happen if nothing is done now, and what is likely to happen if certain steps are taken.
In this chapter we consider the role of macroeconomic models in the forecasting process. The use of macroeconomic models has developed extensively since the 1960s. In the US, private institutions have been able to obtain forecasts of the US economy based on the Wharton model since 1963, and in the UK the London Business School has been modelling the economy since 1966. Since then the number of forecasts based on macroeconomic models has expanded considerably, and Fildes and Chrissanthaki (1988) suggest that over 100 agencies are involved in making macroeconomic forecasts for the UK. Many of these agencies utilise macroeconomic models for this purpose but, for commercial reasons, details of their forecasting methods are not published. In section 5.2 we examine the general nature of macroeconomic models, followed by the presentation of a simple illustrative model in section 5.3. In section 5.4 we demonstrate how forecasts are prepared using this simple model as an example, and in section 5.5 we consider the role of judgemental adjustments. Forecasts of the 1980–82 recession are examined in section 5.6 and the decomposition of forecast errors in section 5.7. In section 5.8 we discuss the accuracy of macroeconomic forecasts, and our conclusions are presented in section 5.9.
Nature of macroeconomic models
Essentially a macroeconomic model is an attempt to describe an economy. This description may be presented in verbal form, diagrammatic form or in the form of mathematical equations. Standard macroeconomic texts are firmly based in the first two approaches with some use of mathematics.
In recent years there have been extensive developments in the methods used in economic forecasting. This book presents an introduction to these methods for advanced undergraduate and postgraduate students of economics. We assume a background of intermediate economics and introductory econometrics. Part 1 is concerned with techniques. In chapter 1, a general introduction to forecasting methods is presented, and in chapter 2 recent advances in time-series methods are reviewed. Ways of combining forecasts are discussed in chapter 3. Part 2 is concerned with applications of these techniques in microeconomics (chapter 4), macroeconomics (chapter 5) and asset markets (chapter 6). The concluding chapter brings together some of the earlier results.
We are grateful to Peter Stoney, University of Liverpool, Paul de Grauwe, Catholic University of Leuven, David Byers, Michael Cain and David Law, University College, Aberystwyth for their comments on parts of the manuscript. Any errors and omissions are the responsibility of the authors. We acknowledge financial assistance from the Economic and Social Research Council (Grant B01250033) and Liverpool Polytechnic.
We have seen that there are several different methods of producing forecasts and also that, even when a particular method is selected, a forecaster still has to choose such things as the variables of interest, the functional form and the estimation procedure. As a result, we frequently have several, generally different, forecasts available for the same outcome. The question then is whether just one particular forecast should be chosen or some form of average taken. This has received much attention in the academic literature over the last few years. In this chapter we consider the main themes of this literature as well as reviewing some of the empirical contributions. In section 3.2 we determine the optimal way in which two unbiased forecasts can be combined. The weights are shown to depend on the variances and covariances of the forecast errors. A generalisation of this, which extends to include biased forecasts, is presented in section 3.3, and the problems caused by serially correlated errors are discussed in section 3.4. Other approaches to combining forecasts, including the use of the simple average, are considered in section 3.5. The empirical evidence on how different combinations perform is reviewed in section 3.6 and some practical suggestions for deciding how to choose an appropriate combination are offered. We then consider the results of the Makridakis forecasting competition which compares a wide range of time-series forecasts, as well as some simple combinations.
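The variance-minimising combination of two unbiased forecasts can be sketched numerically. In the standard result alluded to above, if the two forecast errors have variances v1 and v2 and covariance c, the weight on the first forecast that minimises the combined error variance is w = (v2 − c)/(v1 + v2 − 2c). The code below is an illustrative sketch of that formula, not a reproduction of the book's notation; the example numbers are invented.

```python
# Optimal combination of two unbiased forecasts (illustrative sketch).
# With error variances v1, v2 and error covariance c, the combination
#   f = w*f1 + (1 - w)*f2
# has error variance V(w) = w^2*v1 + (1-w)^2*v2 + 2*w*(1-w)*c,
# which is minimised at w = (v2 - c) / (v1 + v2 - 2c).

def optimal_weight(v1, v2, c):
    """Variance-minimising weight on forecast 1."""
    return (v2 - c) / (v1 + v2 - 2 * c)

def combined_variance(w, v1, v2, c):
    """Error variance of the weighted combination."""
    return w * w * v1 + (1 - w) ** 2 * v2 + 2 * w * (1 - w) * c

# Example: uncorrelated errors, forecast 2 four times as noisy.
w = optimal_weight(1.0, 4.0, 0.0)          # 0.8 -- most weight on forecast 1
v = combined_variance(w, 1.0, 4.0, 0.0)    # 0.8 -- below both 1.0 and 4.0
```

Note that with uncorrelated errors the combined variance (0.8) is below the variance of either individual forecast, which is the basic argument for combining rather than selecting.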
In this chapter we are concerned with methods of forecasting in the microeconomic environment, that is, at the level of a particular firm or industry rather than the whole economy (which is the subject of chapter 5). In chapters 2 and 3 we saw various time-series methods which can be applied in any circumstances, given the required data. They are basically non-causal extrapolative methods. Here the approach adopted is to assume that a causal model will help in understanding behaviour and so will produce accurate forecasts. Whether it is worthwhile constructing such a model depends on the costs and benefits of the forecasting process. The costs can generally be estimated but the benefits are more difficult to assess. This is particularly the case when the choice is between a virtually costless naive model, say, a random walk or simple extrapolation model, and a complex causal model which is expected, ex ante, to give more accurate forecasts, but requires much judgement by the modeller.
We take as our starting point demand analysis, in which the theory of consumer behaviour is used to model how individuals behave in a perfectly competitive market. By making simplifying assumptions it is possible, in principle, to construct an economic model which attempts to explain demand in terms of observed prices, incomes, advertising and other variables. With further assumptions it is possible to make forecasts of their future values.
Macroeconomic forecasts are traditionally stated as point estimates. Retrospective evaluations of forecasts usually assume that the cost of a forecast error increases with the arithmetic magnitude of the error. As a result, measures such as the root-mean-square error (RMSE) or the mean absolute error (MAE) are most often used to summarize forecast performance.
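The two traditional summary measures mentioned here are easily stated: the RMSE is the square root of the average squared forecast error, and the MAE is the average of the absolute errors. The short sketch below computes both on an invented series of actuals and forecasts (the numbers are purely illustrative).

```python
import math

def rmse(actual, forecast):
    """Root-mean-square error of a forecast series."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

def mae(actual, forecast):
    """Mean absolute error of a forecast series."""
    n = len(actual)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / n

# Hypothetical growth rates (actual) and the corresponding forecasts.
actual   = [2.0, 3.0, 1.0, 4.0]
forecast = [2.5, 2.0, 1.5, 3.0]
# errors: -0.5, 1.0, -0.5, 1.0  ->  MAE = 0.75, RMSE = sqrt(0.625) ~ 0.79
```

Because the RMSE squares the errors before averaging, it penalises large errors more heavily than the MAE does, which is why RMSE is always at least as large as MAE on the same series.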
For many users, however, the traditional approach may not correspond with their own uses and evaluations of macro forecasts. The premise of this chapter is that it could be valuable for many users to accurately predict the stage of the business cycle several quarters ahead. For example, a government policymaker accountable to the electorate might well want the economy to be expanding in the quarter before an election; the actual levels of real GNP and other variables would be of secondary importance. Another example is that a producer of capital goods might expect two quite different sales levels to be associated with a particular level of real GNP, depending on whether the economy is expanding or contracting.
In short, when a variable such as real GNP is predicted, the relevant loss function may not be a simple linear or quadratic function of the forecast error. This chapter therefore proposes a measure to supplement the traditional summaries of forecast errors. The new measure attempts to capture the extent to which the stage of the business cycle is accurately predicted.
The system of leading indicators is perhaps the least theoretical of forecasting tools. It began as a purely statistical classification of the 487 economic time-series that the National Bureau of Economic Research had in its data bank as of 1937, in response to the concern of the administration over recovery from the 1937–8 recession (Mitchell and Burns, 1938). That project produced a list of leading indicators, but not an index; the index was based on later analysis of a much more extensive bank of series (Moore, 1950, 1955; especially 1955, pp. 69–71). The National Bureau pioneers were well aware of leading business cycle theories, but the theories did not influence their procedures for classifying time-series as leading, coincident, or lagging.
This chapter is an attempt to supply a theoretical basis for leading indicators. Readers may well ask, why bother? After all, the index of leading indicators maintains its standing as a forecasting tool very successfully, in spite of the enormous amounts of time and resources invested in competitors – in sophisticated new methods of time-series analysis and in large and small econometric models that claim to have solid theoretical foundations. If the index works, why not just use it?
The answer that Koopmans (1947) gave in his celebrated critique “Measurement Without Theory” was essentially that the atheoretical National Bureau approach (including, but not restricted to, leading indicators) could never lead to inferences about the probable effects of stabilization policies.
In Australia the leading and coincident indexes of economic activity, computed monthly by the Columbia University-based Center for International Business Cycle Research (CIBCR) in collaboration with Dr. Ernest Boehm of Melbourne University, have been gaining increasing prominence in economic debate. Movements in the indexes are the subject of many newspaper articles, and government and business economists alike use them to gauge the performance of the macroeconomy.
In this chapter several applications of the use of these two important economic indexes are discussed. Firstly, the degree to which movements in the leading index reliably anticipate fluctuations in the coincident index is investigated. Secondly, the usefulness of the leading index in forecasting fluctuations in telecommunications traffic is statistically evaluated. Finally, the Australian and U.S. coincident indexes are used to clarify the question of whether there is any empirical support for the view that Australia's economy is systematically led by cyclical fluctuations in the U.S. economy.
A statistical analysis of the relationship between the leading and coincident indexes
In this section the results of two related statistical analyses are presented. The relationship between the two indexes is first investigated using the technique of cross-spectral analysis; subsequently, a Granger causality analysis is conducted on the two series. [Full details of these two studies may be found in Layton (1986a, 1987a).]
The cross-spectral analysis
The choice of the technique of cross-spectral analysis is motivated by the possibility that the strength of the association between the two indexes is likely to vary according to whether swings in the leading index are short term or long term in nature.
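The Granger causality analysis mentioned above can be illustrated with a minimal one-lag version of the test: regress the coincident series on its own lag (restricted model), then add a lag of the leading series (unrestricted model), and compare residual sums of squares with an F statistic. This is only a sketch of the idea; Layton's studies used the published index data and richer lag structures, and the data below are simulated.

```python
import random

def solve(A, b):
    """Solve the small linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_rss(X, y):
    """Residual sum of squares from OLS of y on the columns of X."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

def granger_f(x, y):
    """One-lag Granger test: does lagged x help predict y?"""
    n = len(y) - 1                                              # usable obs
    Xr = [[1.0, y[t - 1]] for t in range(1, len(y))]            # restricted
    Xu = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]  # unrestricted
    yy = y[1:]
    rss_r, rss_u = ols_rss(Xr, yy), ols_rss(Xu, yy)
    return (rss_r - rss_u) / (rss_u / (n - 3))  # F with (1, n-3) df

# Simulated example in which x genuinely leads y by one period.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + random.gauss(0, 0.5))
```

A large F statistic (here far above conventional critical values) indicates that the lagged leading series adds predictive content beyond the coincident series' own history, which is the sense of "causality" used in these studies.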
In the ongoing effort to utilize and improve the forecasting properties of leading indicators, analysts on both sides of the Atlantic and Pacific are increasingly combining quantitative indicators of the sort pioneered by Arthur F. Burns and Wesley C. Mitchell with qualitative survey data. We have in the past considered the forecasting usefulness of a number of surveys, including the surveys conducted by the European Economic Community, the Confederation of British Industry in the United Kingdom, Dun and Bradstreet, Inc., and the Michigan Survey Research Center in the United States. In a paper we presented at the September 1985 meeting of CIRET in Vienna, we explored some of the forecasting properties of the price surveys conducted by the National Association of Purchasing Management (NAPM) in the United States (Moore and Klein, 1985). One of the unique features of this survey is that it reports buying prices rather than selling prices, and we examined some of the relationships between this survey and measures of price fluctuations.
The preliminary work with the NAPM data proved so promising that we here concentrate on this source and develop the analysis not only of prices but also of other leading indicators, namely, new orders, inventory change, and vendor performance. In each case we shall compare the turning points in the NAPM series with the U.S. business cycle chronology as well as with comparable quantitative series. Correlation analyses will also be used. In this way we can evaluate the overall usefulness for forecasting of a data set that we believe has been underutilized thus far.
A broadly based agreement exists among students of economic conditions that inventories play an important role in the periodic economic fluctuations from boom to bust and back again to boom. That finding by economists should not be praised as an extraordinary piece of brain work, however, because every manufacturer and trader, plus various functionaries, such as business managers and purchasing agents, and even sales clerks, can independently come to the same conclusion by observing the production, sales, and inventory records. In fact, the inventory data are collected with the help of those production and sales agents. The economists may have differences of opinion about how to best use and evaluate the inventory data so collected. This chapter, in effect, is concerned with the use of various inventory data for evaluation of current and future economic conditions.
Inventory data as a business cycle indicator
The application of inventory data may create misunderstanding when one does not carefully distinguish between flow and stock concepts relating to inventories. The flows and stocks have different timing sequences. To begin with, the flows are equivalent to changes and lead the cyclical timing of the stocks or aggregates (also referred to as levels). The accumulation of inventory stock has normally slowed down by the time economic activity reaches a peak, because the inventory flow (called inventory investment or buildup) has reached its peak some time prior to the cyclical peak and has been slowing down from that point on.
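The timing point can be seen with a small numerical illustration: since the stock is the running sum of the flow, the stock keeps rising as long as the flow is still positive, even after the flow has begun to decline, so the flow's peak must come first. All figures below are hypothetical.

```python
# Flow (inventory investment) versus stock (inventory level): a sketch.
# The stock is the cumulative sum of the flow, so it continues to rise
# after the flow has peaked, as long as the flow remains positive.

flow = [1, 2, 3, 4, 3, 2, 1, 0, -1, -2]   # hypothetical inventory investment
stock = []
level = 10.0                               # hypothetical starting stock
for f in flow:
    level += f
    stock.append(level)

flow_peak = flow.index(max(flow))     # period 3 (flow = 4)
stock_peak = stock.index(max(stock))  # period 6 -- three periods later
```

The stock only turns down once the flow goes negative, which mirrors the chapter's point that inventory investment leads the cyclical timing of the inventory level.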
Since their initial development in 1938 by Wesley Mitchell, Arthur Burns, and their colleagues at the National Bureau of Economic Research, the Composite Indexes of Coincident and Leading Economic Indicators have played an important role in summarizing the state of macroeconomic activity. This chapter reconsiders the problem of constructing an index of coincident indicators. We will use the techniques of modern time-series analysis to develop an explicit probability model of the four coincident variables that make up the Index of Coincident Economic Indicators (CEI) currently compiled by the Department of Commerce (DOC). This probability model provides a framework for computing an alternative coincident index. As it turns out, this alternative index is quantitatively similar to the DOC index. Thus this probability model provides a formal statistical rationalization for, and interpretation of, the construction of the DOC CEI. This alternative interpretation complements that provided by the methodology developed by Mitchell and Burns (1938) and applied by, for example, Zarnowitz and Boschan (1975).
The model adopted in this chapter is based on the notion that the comovements in many macroeconomic variables have a common element that can be captured by a single underlying, unobserved variable. In the abstract, this variable represents the general “state of the economy.” The problem is to estimate the current state of the economy, that is, this common element in the fluctuations of key aggregate time-series variables. This unobserved variable – the state of the economy – must be defined before any attempt can be made to estimate it.
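The single-common-element idea can be sketched very crudely without the chapter's full dynamic factor machinery: if several coincident series each equal an unobserved common state plus idiosyncratic noise, then averaging the standardised series recovers the state quite well, because the idiosyncratic noise averages out. The simulation below is only an assumption-laden toy version of that idea, not the estimator developed in the chapter.

```python
import math
import random

def standardise(s):
    """Rescale a series to zero mean and unit (population) variance."""
    m = sum(s) / len(s)
    sd = math.sqrt(sum((v - m) ** 2 for v in s) / len(s))
    return [(v - m) / sd for v in s]

def corr(a, b):
    """Sample correlation between two series."""
    a, b = standardise(a), standardise(b)
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Simulate four coincident series: common state + idiosyncratic noise.
random.seed(2)
T = 400
state = [random.gauss(0, 1) for _ in range(T)]
series = [[s + random.gauss(0, 0.7) for s in state] for _ in range(4)]

# Crude estimate of the unobserved state: average the standardised series.
z = [standardise(s) for s in series]
estimate = [sum(col) / 4 for col in zip(*z)]
```

The averaged series tracks the unobserved state more closely than any single series does; the chapter's probability model can be read as a formal, dynamic version of this intuition, with estimated rather than equal weights.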
There is a practical reason for being interested in consensus forecasts, since they are more accurate than most predictions and will sometimes have a better track record than virtually all forecasting systems based on individual records or parsimonious models (Moore, 1969; McNees, 1987). In this chapter I illustrate the superiority of a consensus approach by using a diffusion index and the downness properties of other leading economic indicators to find the right ball park for real GNP forecasts and improve our ability to identify economic recessions before their occurrence.
Finding the right ball park
The starting point for what is hoped will be better forecasts is the discovery that the distribution of the average annual growth rates for real GNP has been trimodal. Since 1948 there have been eight instances when the GNP growth rate was zero or negative; seventeen instances when the growth rate was in the 1.7 to 4.1 percent range; and fourteen years when the growth rate was in the 4.7 to 10.3 percent range. The gaps between these poor, mediocre, and super growth rate distributions can probably be attributed to interactions between the multiplier and accelerator principles.
From 1955 to 1980 the average annual percentage change in real gross private fixed domestic investment was approximately equal to three times the average growth rate for real GNP minus six percentage points. This accelerator relationship implies that real GNP must increase at an average rate of about 2 percent per year just to keep investment from falling and having a deleterious feedback effect on the rest of the economy.
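The accelerator arithmetic quoted above can be made explicit. If the percentage change in real fixed investment is approximately three times real GNP growth minus six percentage points, then investment is flat exactly when GNP grows at 2 percent, which is the break-even rate the text refers to. The function below simply encodes that stated approximation.

```python
# The accelerator relationship quoted in the text (an approximation
# estimated over 1955-80): percentage change in real gross private fixed
# domestic investment ~ 3 * (real GNP growth rate) - 6.

def investment_growth(gnp_growth):
    """Approximate % change in real fixed investment, both in percent."""
    return 3 * gnp_growth - 6

# Solving 3g - 6 = 0 gives the break-even GNP growth rate of 2 percent:
# below it investment falls, above it investment rises.
break_even = 2.0
```

So GNP growth of 1 percent implies investment falling at roughly 3 percent, while growth of 3 percent implies investment rising at roughly 3 percent, illustrating the leveraged feedback the text describes.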