The selection of one preferred policy package from a set of candidates involves the use of at least an implicit criterion. In policy studies, specifically those using optimal-control techniques, this criterion is made explicit. Yet a precise and realistic specification of the parameters defining the derivatives of an explicit-criterion function is extremely difficult. This difficulty is compounded by the fact that policy-makers have been reluctant, and usually unable, to specify their relative priorities in advance of a planning exercise. Typically they need to get a feel for how much they can or cannot expect to achieve; they need to establish the trade-offs available between policy goals, and the likelihood of not being able to reach them, in order to set the limits to policy-making and acceptable compromise.
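To fix ideas, the explicit criterion used in optimal-control studies of this kind is typically a quadratic penalty on deviations of targets and instruments from their desired paths; the following form is a standard illustration only (the notation is ours):

\[
W=\sum_{t=1}^{T}\Big[(y_t-y_t^{d})'Q_t\,(y_t-y_t^{d})+(u_t-u_t^{d})'R_t\,(u_t-u_t^{d})\Big],
\]

where $y_t$ is the vector of target variables, $u_t$ the vector of instruments, the superscript $d$ denotes desired paths, and $Q_t$, $R_t$ are weighting matrices. It is precisely the entries of $Q_t$ and $R_t$ – the relative priorities – which policy-makers find so difficult to specify in advance of a planning exercise.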
If it is hard for policy-makers to set relative priorities in explicit numerical form, it is certainly no easier for their advisory staff. As we have pointed out, the specified objectives will be expressed in the form of an approximation to some generally unknown, or incompletely known, collective-preference function. This preference function is unknown partly because policy-makers are unable to give a full description of their relative priorities for all possible events, but also because they cannot know in advance the preferences, bargaining strengths or solutions acceptable to the other decision-makers who influence economic decisions. If other decision-makers are involved, then to ignore their influences on the final decision might introduce serious errors.
Control methods have been used in economics – at least at the academic level – for about 35 years. Tustin (1953) was the first to spot a possible analogy between the control of industrial and engineering processes and post-war macroeconomic policy-making. Phillips (1954, 1957) took up Tustin's ideas and attempted to make them more accessible to the economics profession by developing the use of feedback in the stabilisation of economic models. However, despite the interest of economists in Phillips's contribution, the use of dynamics in applied and theoretical work remained relatively rudimentary. Econometric modelling and computer technology were still in their very early stages, so there were practically no applications in the 1950s. Matters began to change in the 1960s. Large-scale macroeconometric models were being developed and widely used for forecasting and policy analysis, and powerful computers were becoming available. These advances, together with important theoretical progress in control theory, made the application of control concepts to economic systems a much more attractive proposition. Economists, with the help of a number of control engineers, began to encourage the use of control techniques.
Independent of the development of control techniques in engineering and attempts by Tustin and Phillips to find applications in economics, there were also a number of significant developments in economics and in particular in the theory of economic policy.
This book was started while the first author was at Imperial College, London, and the second author at the University of Rotterdam. It began with the objective of first covering the standard optimal-control theory – largely imported from mathematical control theory – and the parallel developments in economics associated with Tinbergen and Theil, and then of relating this work to recent developments in rational-expectations modelling. However, progress in the field has been such that the original objectives were revised in the light of many interesting developments in the theory of policy formulation under rational expectations, particularly those concerned with the issues of time inconsistency and international policy coordination. The first six chapters are devoted to various aspects of conventional control theory, but with an emphasis on uncertainty. The remaining four chapters are devoted to the implications of rational, or forward-looking, expectations for optimal-control theory in economics. We also examine the increasing use which has been made of game-theoretic concepts in the policy-formulation literature, and how this is relevant to understanding the role expectations play in policy.
We owe many debts to colleagues past and present. A number of people have read and commented upon previous drafts of this book. In particular we are grateful to Andreas Bransma, Bob Corker, Berc Rustem, John Driffill, Sheri Markose, Mark Salmon and Paul Levine. We also owe a particular debt of gratitude to Francis Brooke, whose editorial patience was inexhaustible.
In chapter 9 we considered non-cooperative, full-information games; that is, competitive decision-making in which privately optimal solutions were sought by private agents. The macroeconomic policy-maker, who was also assumed to have full information, pursued his own optimal policy. The objective function lying behind this policy may have elements of Bergsonian social welfare, with policy-makers maximising the utility of the representative individual. But it may be that other considerations are also important here.
The information needed to make an optimal choice can sometimes be very onerous to obtain. In competitive, atomistic markets, agents can make their optimal choices solely on the basis of relative prices, so the information needed to make decisions is comparatively small. At the other extreme of competitiveness, the monopolist needs to know only the slope of the demand curve facing him, as well as the vector of relative prices, to be able to make optimal decisions. The difficulties lie between these two extreme cases, where conditions of oligopoly obtain. There the analysis of decision-making is much less straightforward, not because the usual marginal conditions for an optimal choice fail to apply, but because each agent has to know how other agents will respond to his own actions, and then how to respond in turn to those responses. The information requirements here are overwhelming.
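A stylised illustration of these escalating information requirements is a two-firm Cournot market – a toy example of our own, not drawn from the text. Each firm's optimal output depends on its conjecture about its rival's output, so even locating an equilibrium amounts to iterating on mutual responses:

```python
# Toy Cournot duopoly: each firm's best response depends on a conjecture
# about its rival's output, so finding an equilibrium means iterating on
# mutual responses. All parameter values are illustrative.

def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit-maximising output given the rival's output q_other,
    for inverse demand P = a - b*(q_own + q_other) and unit cost c."""
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

q1, q2 = 0.0, 0.0
for _ in range(100):  # revise conjectures until mutually consistent
    q1_new = best_response(q2)
    q2_new = best_response(q1)
    if abs(q1_new - q1) + abs(q2_new - q2) < 1e-10:
        break
    q1, q2 = q1_new, q2_new

print(f"Cournot equilibrium: q1 = {q1:.2f}, q2 = {q2:.2f}")
# With a=100, b=1, c=10 this converges to q1 = q2 = 30.
```

With the parameters shown the conjectures converge quickly; with more firms, asymmetric costs or richer conjectures about responses-to-responses, the information each agent needs about the others grows rapidly.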
There may be incomplete information not only about the performance of the economy but also about the intentions of both private agents and the government.
Reliability is obviously an important property for economic policies. Policy-makers understand that economic policy-making requires a model, historical data, projections of exogenous events, and some kind of preference function – even if these ingredients are specified informally rather than introduced into an explicit optimisation framework. But the extent of the uncertainty about future events, and about the accuracy of one's model as a representation of the economy, means that economic-policy formulation is seldom as straightforward as dynamic-optimisation procedures would suggest. The main difficulty with econometrically based policy recommendations is that they are not robust to shocks and that they seldom incorporate any real information about the form and extent of the risks faced. Indeed, the usual certainty-equivalent decision rules are invariant to risk. Thus, knowing that the four ingredients of policy formulation just mentioned are in fact uncertain, policy-makers often prefer to follow their own judgement. Moreover, policy-makers particularly dislike committing themselves to policies which they suspect may later need frequent and substantial revisions. If they can be convinced of the robustness of some particular alternative strategy, they may well prefer to accept that strategy even if it promises fewer of the benefits potentially offered by other policies.
There is, therefore, a need to investigate whether the trade-off between a robust (or risk-sensitive) policy and one which is more ambitious is likely to be significant.
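The claim that certainty-equivalent rules are invariant to risk can be made concrete in the standard linear-quadratic framework (the notation is ours, for illustration only). With a linear model $x_{t+1}=Ax_t+Bu_t+e_t$ and a quadratic objective with weights $Q$ and $R$, the optimal rule is a linear feedback on the state, $u_t=-K_t x_t$, with the gain given by the familiar Riccati recursion

\[
K_t=(R+B'S_{t+1}B)^{-1}B'S_{t+1}A,\qquad
S_t=Q+A'S_{t+1}(A-BK_t),
\]

so that $K_t$ depends only on $(A,B,Q,R)$: the covariance of the disturbances $e_t$ never enters the rule. This is exactly the sense in which such rules carry no information about the form and extent of the risks faced, and why a risk-sensitive alternative may trade some expected performance for robustness.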
So far we have been concerned with the economic-policy aspects of linear models. But the majority of econometric models which are used for forecasting, policy simulation and optimisation in the economic-policy process are actually non-linear. In principle this can create a number of difficulties because the analytical results of linear systems are no longer available.
Solutions to the problem of designing economic policy when the model is non-linear have taken two basic forms:
(i) A linearisation approach, in which the model is replaced by a linear approximation and the apparatus of linear theory is then applied to that approximation.
(ii) A direct approach, in which the problem of finding the minimum of an objective function subject to the non-linear model is treated as a non-linear constrained-optimisation, or non-linear-programming, problem.
For the first approach the main problems are how to generate the linearisation and how good an approximation the linearisation is. For the second approach the main difficulties are concerned with the type of algorithm which is used to find the minimum of the objective function and what kind of derivative information is needed.
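A minimal numerical sketch of the two approaches may help; the one-equation 'model', its parameters and the target below are entirely our own invention, chosen only to show the two strategies side by side:

```python
from scipy.optimize import minimize

# Illustrative one-equation non-linear 'model': target y = f(u), with a
# single instrument u. Both the function and the target are hypothetical.
def f(u):
    return 2.0 * u - 0.1 * u**2   # diminishing returns to the instrument

y_star, u0 = 5.0, 1.0             # desired value of y; reference point for u

# (i) Linearisation approach: approximate f by its tangent at u0 and
# apply the linear theory (here, solve the linearised equation exactly).
h = 1e-6
slope = (f(u0 + h) - f(u0 - h)) / (2 * h)   # numerical derivative at u0
u_lin = u0 + (y_star - f(u0)) / slope       # solution of the linear model
print(f"linearised solution: u = {u_lin:.4f}, y = {f(u_lin):.4f}")

# (ii) Direct approach: minimise the objective subject to the non-linear
# model, treated as a non-linear programming problem.
obj = lambda u: (f(u[0]) - y_star) ** 2
res = minimize(obj, x0=[u0], method="BFGS")
print(f"direct solution:     u = {res.x[0]:.4f}, y = {f(res.x[0]):.4f}")
```

The gap between the two answers is the first approach's central question – how good the linearisation is – and re-linearising at the new point and repeating would narrow it. The second approach's cost is its reliance on the choice of algorithm and on derivative information, here estimated internally by finite differences.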
The examination of non-linear models does not throw any fresh light upon the problems of economic policy. But since most of the models in practical use are non-linear it is worth considering how they can be handled and how the results which are obtained can be related to the results obtained for linear systems.
Introduction: Science and scientism in microeconomics
Let “ideal science” be the reconstruction of science as predictions deduced from theory and tested by observations. In Chapter 5 we argued that classical mechanics, though science, failed as “ideal science” because of a set of attachment difficulties. In Chapter 6, with hypothetical particular utility and production functions, we argued that microeconomics failed as “ideal science” because of precisely the same set of attachment difficulties, and no others. Since its failings are no different from those of classical mechanics, there exist utility and production functions which make microeconomics into a scientific research program in precisely the same way in which classical mechanics is a scientific research program.
Yet in Chapter 5 we also argued that classical mechanics is a successful scientific research program. That is, we exhibited particular force equations for various situations in the world. And we argued that in each such situation the conjunction of the particular force equation and Newton's Second Law implied a prediction statement which, ignoring the attachment difficulties, would confront and fail to be contradicted by actual observations. In Chapter 6, though, we made no claim that our hypothetical utility and production functions characterized any situation in the world or that their hypothetical prediction statements succeeded in confronting and failed to be contradicted by actual observations. Our account of microeconomics is so far one of an undeveloped, nonmature, or not yet successful scientific research program.
We began Chapter 5 by acknowledging the serious difficulties Thomas Kuhn and others have raised for Karl Popper's rationalist reconstruction of science as testing the deduced consequences of theories against observations. Nevertheless, we claimed that an account of classical mechanics along Popper's lines, together with a catalogue of the account's difficulties, would be useful in exploring the question, Is microeconomics science? We argued that since classical mechanics is science, reasons to believe that microeconomics is not science might be found among differences in the catalogues of difficulties of Popper-like reconstructions of the two theories. Our reconstruction of classical mechanics generated a catalogue of seven difficulties: Laws are not logically falsifiable; approximation conventions are needed in prediction statements; circular-like reasoning is entailed; complicated (or even intractable) equations of motion must sometimes be solved; the values of constants are not implied by the theory; the theory's realm of applicability is not always fully observable; scientists' reporting may be imperfect.
We began Chapter 6 by challenging the view that recent work in the philosophy of science offers an affirmative answer to the question, Is microeconomics science? The implicit logic of that view is incoherent: it holds that Popper's model of science fails to account for both physics and economics, that physics is nonetheless science, and that therefore economics is science too.
The historical origin of the income maintenance experiments
Beginning in the early 1960s, the work of quantitative social scientists and applied statisticians began to play an increasing role in social policy deliberations in the United States. But their experience with the Coleman Report, which appeared in mid-1966, made many leading policy-oriented statisticians and social scientists doubt the value of large-scale observational studies (such as Coleman's) as aids in formulating social policy. Some began to argue for the statistician's classical Fisher-type controlled experiment as the needed precursor to public policy formulation.
Meanwhile, rebellions in the inner cities replaced the civil rights movement of the early 1960s. In an economic expansion driven by the Vietnam War, these rebellions triggered one traditional response of the state: an expansion of the welfare system and a stepping-up of the “War on Poverty.” But political liberals in the antipoverty programs and welfare rights groups began to argue for even more: a guaranteed income program to replace the demeaning welfare system. Economists of many political persuasions were amenable, for different reasons, to one form of guaranteed income: a negative income tax (NIT). But political conservatives in Congress balked, claiming that such a program would lead the poor to stop working.
In late 1966, Heather Ross, an M.I.T. economics doctoral candidate, working for a Washington antipoverty agency, made a proposal that eventually broke the political deadlock (Ross 1966).
Chapter 1 took classical notions of causality and probability and fused them into a concept of gambling-device law (GDL). We also argued that the proposition that a GDL governs a given situation cannot, in principle, be based solely on what is observable. On the basis of the GDL concept, some assumptions based on the physical act of randomization, and some further assumptions not baseable on the observations, Chapter 2 constructed a theory of causal point estimation for statistical analysis of data from controlled experiments. We also argued that extensions of the theory to tests of hypotheses of no causal effects and no mean causal effect entail conceptual difficulties.
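The core of such a randomization-based theory of causal point estimation can be stated compactly (in our summary notation, not the book's): if each of the $N$ units has a treatment response $y_{Ti}$ and a control response $y_{Ci}$, only one of which is ever observed, then random assignment makes the difference of observed group means unbiased for the mean causal effect,

\[
E\big[\bar{y}_T-\bar{y}_C\big]=\frac{1}{N}\sum_{i=1}^{N}\big(y_{Ti}-y_{Ci}\big),
\]

where the expectation is taken over the random assignment itself. The estimate thus rests on the physical act of randomization rather than on any sampling distribution assumed for the observations – which is also why extending the theory to hypothesis tests of no causal effect raises the further conceptual difficulties noted above.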
Chapter 3 argued that special features of social experiments may make Chapter 2's theory of randomization-based causal point estimation inapplicable to them. We also argued that the choice of a complex experimental design and any (unnecessary) failure to account for the complexity in subsequent statistical analysis could prevent Chapter 2's theory of causal inference from applying to a social experiment's results. Chapter 4 developed a second concept of probability (not baseable solely on observation) and constructed a theory of statistical inference (in which randomization has no role to play) based on the second probability concept and other assumptions not baseable on the observations. A social experiment's statistical analysis may be based on the second theory, but if so, randomization becomes a superfluous procedure, and the relationship between the experiment's results and propositions on cause and effect becomes unclear.