Jan Tinbergen built and estimated the first macrodynamic model of the business cycle in 1936. Amongst all the econometricians of his day, Tinbergen was ideally suited for such a task. He had been experimenting with small-scale models of the trade cycle since the late 1920s and was well versed in the ways of dynamic models. He also had a wide knowledge of quantitative business cycle research from his experience as Editor of De Nederlandsche Conjunctuur (the Dutch statistical business cycle journal). Yet, even for one so well qualified, it was a formidable undertaking, for there was a considerable jump from putting together a small cycle model to constructing an econometric model of the business cycle covering the whole economy, and Tinbergen was well aware that the difficulties were not merely due to the difference in scale.
Tinbergen had already given considerable thought to the problems involved in a 1935 survey of econometric research on business cycles commissioned for Econometrica. He had taken as his starting point Frisch's (1933) idea that a business cycle model should consist of two elements, an economic mechanism (the macrosystem):
This system of relations defines the structure of the economic community to be considered in our theory
(Tinbergen (1935), p. 242)
and the outside influences or shocks. But, as Tinbergen had pointed out, this was only a basic design for an econometric model; the scope of Frisch's new term ‘macrodynamics’ was unclear.
Econometrics was regarded by its first practitioners as a creative synthesis of theory and evidence, with which almost anything and everything could, it seems, be achieved: new economic laws might be discovered and new economic theories developed, as well as old laws measured and existing theories put to the test. This optimism was based on an extraordinary faith in quantitative techniques and the belief that econometrics bore the hallmarks of a genuinely scientific form of applied economics. In the first place, the econometric approach was not primarily an empirical one: econometricians firmly believed that economic theory played an essential part in finding out about the world. But to see how the world really worked, theory had to be applied; and their statistical evidence boasted all the right scientific credentials: the data were numerous, numerical and as near as possible objective. Finally, econometricians depended on an analytical method based on the latest advances in statistical techniques. These new statistical methods were particularly important for they gave economists of the early twentieth century ways of finding out about the world which had been unavailable to their nineteenth-century forebears, ways which, in themselves, seemed to guarantee scientific respectability for econometrics.
So, when econometrics emerged as a distinct activity at the beginning of the twentieth century, its use of statistical methods and data to measure and reveal the relationships of economic theory offered a form of investigation strikingly different from those of nineteenth-century economics, which ranged from the personal introspection and casual observation of classical economics to the detailed empiricism of historical economics.
Haavelmo's 1944 paper marks the end of the formative years in econometrics and the beginning of its mature period. His treatise set out the best methods for the practice of econometrics and explained the reasoning behind these rules. His novel ideas on the role of probability pointed the way forward for econometrics, but in many other respects the concepts and approach of Haavelmo's programme were firmly rooted in the past. In his hands, the individual practical solutions and insights generated by the earlier work were finally fitted together as in a completed jigsaw puzzle, showing one single econometrics applicable to all branches of economics.
The coherence of Haavelmo's blueprint derived from a deep understanding of how econometrics worked as an applied science. His paper contained the most explicit and rewarding discussion in econometrics about its most fundamental problem: the problem of non-experimental data that come from
the stream of experiments that Nature is steadily turning out from her own enormous laboratory, and which we merely watch as passive observers.
(Haavelmo (1944), p. 14)
Clearly, econometricians could not isolate or control such data to match their theory, as in the ideal type of experiment – and here I return to the themes of my Introduction. Haavelmo suggested an alternative form of experimental design, in which economic theory could be constructed to meet the conditions of the data. But, for practical econometrics, he recognised that elements of both design types would be required.
The evolution of formal models of the data-theory relationship is complex and intertwined with other important issues in theoretical econometrics such as identification, simultaneity and causality. The tale of how these models developed is recounted through a series of letters which I imagine to have been written by econometricians of the period 1900 to 1950. These letters provide both a synthesis of the history of the ideas involved and a rather personal internal account. The imaginary authors of these letters (named with Greek letters) represent the composite views of a number of econometricians; none consistently represents the views of any single writer. Following Lakatos (1976), the actual history (authors, dates and sources) is told in the notes following each letter. These notes are brief, for most of the literature and ideas involved either have already been discussed in the context of the applied work or will be discussed in detail in the final chapter. Although some letters bear a close resemblance to articles cited in the footnotes, others elucidate less explicit views. The growing formality of the model representations in the letters, and the increasingly technical nature of the discussions, also reflect the real literature.
The chapter is divided into two major parts and a substantive postscript.
The nineteenth-century economists did not, for the most part, recognise the idea or existence of regular cycles in economic activity. Instead, they thought in terms of ‘crises’, a word implying an abnormal level of activity and used in several different ways: it could mean a financial panic (that is, the peak turning point in a commercial or financial market) or it could mean the period of deepest depression when factories closed. A financial crisis was not necessarily followed by an economic depression, but might be a phenomenon solely of the financial markets. There were a few exceptions amongst economists and industrial commentators who recognised a cyclical pattern in economic activity, including, for example, Marx. In addition, there was no agreed theory about what caused a crisis; indeed, it sometimes seemed that not only each economist but each person had their own pet theory of the cause of crises in the economy. One telling example is ‘The First Annual Report of the Commissioner of Labor’ (USA, 1886), which listed all the causes of depressions reported to the various committees of Congress. This list ran to four pages, from
Administration, changes in the policies of
Agitators, undue influence of
through various economic and institutional reasons to
War, absorption of capital by destruction of property during
Work, piece.
There was not much chance of making sense of so many causes using the standard ways of gathering evidence in support of theories in nineteenth-century economics.
The first attempt at measuring a demand function was probably made by Charles Davenant in 1699 (though the result is generally attributed to Gregory King, as ‘Gregory King's Law’) and consisted of a simple schedule of prices and quantities of wheat. By the late nineteenth century, simple comparisons had given way to mathematical formulations of such demand laws. For example, Aldrich (1987) discusses how Jevons (1871) fitted ‘by inspection’ an inverse quadratic function to the six data points of Gregory King's Law. Wicksteed (1889) disagreed with Jevons' function and found that a cubic equation fitted the data exactly. Although this sort of function fitting was possible with six data points, it was obviously inappropriate for large data sets, and by the beginning of the twentieth century investigators had turned to the field of statistics for help in measuring demand relationships.
It quickly became clear that simply applying statistical methods to the price and quantity data did not necessarily lead to sensible results and the work of econometricians in the first two decades of the twentieth century reveals both insight and confusion about why this should be so, as we shall see in the first section of the chapter. The ways in which econometricians formulated the difficulties they faced and then tried to solve them first by adjusting their data and then by developing more complex econometric models of demand are discussed in the following two sections. Other aspects of the difficulties raised in this early work are held over for discussion until Chapter 6.
A ‘probabilistic revolution’ occurred in econometrics with the publication of Trygve Haavelmo's ‘The Probability Approach in Econometrics’ in 1944. It may seem strange that this ‘revolution’ should have been delayed until mid-century for, from the early days of the twentieth century, economists had been using statistical methods to measure and verify the relationships of economic theory. But, even though these early econometricians used statistical methods, they believed that probability theory was not applicable to economic data. Here lies the contradiction: probability theory provides the theoretical basis for statistical inference, and economists used statistical methods, yet they rejected probability. An examination of this paradox is essential in order to understand the revolutionary aspects of Haavelmo's work in econometrics.
At the beginning of the century, applied economists believed that there were real and constant laws of economic behaviour waiting to be uncovered by the economic scientist. As we have seen, this early econometrics consisted of two sorts of activity depending on the status of the theory concerned and the type of law to be uncovered. Where a well-defined and generally agreed theory existed, as in the work on demand, the role of statistical methods was to measure the parameters or constants of the laws.
The idea that price varies negatively with quantity demanded and positively with quantity supplied is a long-established one, although Hutchison (1953) has suggested that classical economists' ideas on supply and demand schedules were neither well defined nor consistent. Nevertheless, or perhaps because of this fuzziness, the desire to make economics more scientific (both to express the theories more exactly and to provide a stronger empirically based knowledge) found expression particularly early in the field of demand. It was one of the first areas of economic theory to receive graphical and mathematical representation. This is generally believed to have been at the hands of Cournot in 1838, although his contributions to the development of economics were not appreciated until later in the century. The Victorian polymath Fleeming Jenkin developed the mathematical and geometric treatment of demand and supply further in a series of articles between 1868 and 1871. He even included variables to represent the other factors which cause the demand curve to shift back and forth. Although the use of mathematics was resisted at the time, it gradually became more acceptable since graphs and equations were good media in which to display the new ‘marginal’ theory.
The ‘marginal revolution’ of the 1870s is usually portrayed as changing the basis of the theory of value from the classical concentration on the production side to a new analysis based on the individual consumer. Blaug (1968) has described how this theory developed along two paths corresponding to the ideas of Marshall and Walras.
Clément Juglar, Wesley Clair Mitchell and Warren M. Persons used statistics in a different fashion from Jevons and Moore. These three economists believed that numerical evidence should be used on a large scale because it was better than qualitative evidence, but their use of data was empirical in the sense that they did not use statistical features of the data as a basis for building cycle theories. Further generalisations are difficult, because their aims differed: Juglar wanted to provide a convincing explanation for the cycle, Mitchell sought an empirical definition of the cyclical phenomenon, whilst Persons aimed to provide a representation of the cycle. The chapter begins with a brief discussion of Juglar's work in the late nineteenth century, and then deals at greater length with the work of Mitchell and Persons, whose empirical programmes dominated statistical business cycle research in the 1920s and 1930s.
The applied work discussed here grew in parallel to that described in Chapter 1, for Juglar was a contemporary of Jevons, and Mitchell and Persons overlapped with Moore. The developments that the three economists, Juglar, Mitchell and Persons, pioneered came to be labelled ‘statistical’ or ‘quantitative’ economics but never bore the tag of ‘econometrics’ as Moore's work has done.
The introduction to this book suggested that econometrics developed as a substitute for the experimental method in economics, and that the problems which arose were connected with the fact that most economic data had been generated in circumstances which were neither controlled nor repeatable. By using statistical methods, econometricians also obtained access to an experimental tradition in statistics. This tradition consisted of generating data artificially, under theoretically known and controlled conditions, to provide a standard for comparison with empirical data or to investigate the behaviour of data processes under certain conditions. Such experiments now form a significant part of the work in econometric theory (under the title Monte Carlo experiments) and are sometimes used in applied work (model simulations), but their use dates from the early years of econometrics. Experiments played a particularly important role in the work of the 1920s which is discussed here.
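The experimental tradition described above, generating data under theoretically known and controlled conditions and comparing estimates with the truth, survives as the modern Monte Carlo experiment. A minimal sketch, with purely illustrative parameter values, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# The 'known and controlled conditions': a linear data-generating process
# whose true parameters we choose ourselves.
true_intercept, true_slope, sigma = 1.0, 2.0, 0.5
n_obs, n_trials = 50, 2000

x = rng.uniform(0, 10, n_obs)  # regressor held fixed across trials

# The artificial 'stream of experiments': repeatedly generate data from
# the known process and estimate the slope by least squares.
slopes = np.empty(n_trials)
for i in range(n_trials):
    y = true_intercept + true_slope * x + rng.normal(0, sigma, n_obs)
    slope, _intercept = np.polyfit(x, y, 1)
    slopes[i] = slope

# The sampling distribution of the estimator, centred on the truth,
# is exactly the kind of standard for comparison the text describes.
print(slopes.mean())
```

Because the data are generated under known conditions, the behaviour of the estimator can be checked against the truth, something impossible with the passively observed data that econometricians otherwise face.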
This chapter deals with technical issues of data analysis and associated explanations of the generation of economic cycle data as investigated by Yule, Slutsky and Frisch. The account begins with the work of Eugen Slutsky and George Udny Yule in the 1920s, both of whom made considerable use of statistical experiments. Yule criticised the methods of analysing time-series data described in the previous two chapters. Slutsky explored the role of random shocks in generating cyclical patterns of data.
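Slutsky's experiment is easy to reproduce in miniature: summing purely random shocks over a moving window yields a smooth, cycle-like series even though no cycle is built into the process. The window length and sample size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Purely random shocks with no cyclical structure.
shocks = rng.normal(size=1000)

# A Slutsky-style moving sum of the last 10 shocks.
window = 10
series = np.convolve(shocks, np.ones(window), mode="valid")

def lag1_autocorr(x):
    """First-order autocorrelation of a series."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# The raw shocks are essentially uncorrelated, while the moving sum is
# strongly autocorrelated (theoretically (window-1)/window = 0.9),
# which is what gives it the appearance of cycles.
print(lag1_autocorr(shocks))
print(lag1_autocorr(series))
```

This is the heart of Slutsky's point: apparent cycles in the smoothed series are an artefact of summing random causes, not evidence of an underlying cyclical mechanism.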
When I first began to research the history of econometrics in 1979, it was a fairly common assumption amongst economists that econometrics had no past, no history, before the 1950s. I was glad to find this was not so, and my pages were not to remain blank. But, instead of the decorative collection of antique notions which might have been expected, I had the excitement of discovering that pre-1950 econometrics was bristling with interesting people and functional ideas. No wonder, for it was during this early period that the fundamental concepts and notions of the econometric approach were thought out. This book is the history of those ideas.
Since there was little in the way of existing literature on the development of econometrics, I have relied on the help of many people. First, I should like to thank those pioneer econometricians who patiently answered my questions about what they were doing and thinking anything up to 50 years, or even more, ago. Amongst the founding fathers of the subject, I was able to talk to Trygve Haavelmo, Herman Wold, Richard Stone, Jan Tinbergen, Olav Reiersøl, George Kuznets, and the late Tjalling Koopmans, Sewall Wright and Holbrook Working. All helped me in various ways: they corrected some of my misapprehensions and filled in the gaps in my understanding in the way that only those with personal experience of the events and their time could do.