First used by the zoologist Sewall Wright (1925), the method of instrumental variables (IVs) was initially given a formal development by Reiersøl (1941, 1945) and Geary (1949). It is now one of the most useful and widely applied estimation methods of modern econometrics. Major contributions to the early development of this method in econometrics are those of Theil (1953), Basmann (1957), and Sargan (1958) for systems of linear simultaneous equations with independent identically distributed (i.i.d.) errors. A major concern of this development was to make efficient use of the available instrumental variables by finding instrumental variables estimators that have minimal asymptotic variance.
This concern is also evident in the subsequent work of Zellner and Theil (1962), Brundy and Jorgenson (1974), and Sargan (1964), who consider contemporaneous correlation in linear systems; Sargan (1959), Amemiya (1966), Fair (1970), Hansen and Sargent (1982), and Hayashi and Sims (1983), who consider specific forms of serial correlation for errors of linear equations or systems of equations; and Amemiya (1983), Bowden and Turkington (1985), and White (1984, Chapter 7), who consider specific forms of heteroscedasticity. For systems of linear equations White (1984, Chapter 4; 1986) considers general forms of nonsphericality in an instrumental variables context. Investigation of the properties of instrumental variables estimators in nonlinear contexts was undertaken in seminal work by Amemiya (1974, 1977) for single equations and for systems of nonlinear equations with contemporaneous correlation.
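As a point of reference for the estimators discussed above, the familiar two-stage least squares (2SLS) form of the instrumental variables estimator can be sketched in a few lines. The following is a minimal illustration on simulated data; the variable names, sample size, and parameter values are hypothetical and are not drawn from any of the studies cited.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=(n, 2))                                   # two valid instruments
u = rng.normal(size=n)                                         # structural error
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)    # endogenous regressor (correlated with u)
y = 2.0 * x + u                                                # structural equation, true coefficient = 2

X = x.reshape(-1, 1)
Z = z
# 2SLS: beta_hat = (X' P_Z X)^{-1} X' P_Z y, with P_Z = Z (Z'Z)^{-1} Z'
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
beta_hat = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
print(beta_hat)   # close to 2.0; ordinary least squares would be biased upward here
```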
In order to explain the cyclical behavior of investment, the single–period neoclassical model of the firm has been revised to include nonlinear and time–dependent investment good production technologies. It has been observed that the shadow price of capital varies considerably over the business cycle and that the price of investment goods differs substantially from measures of the shadow price of capital. These macroeconomic phenomena have prompted modification of the model of the firm to include costs of adjustment of the capital stock and capital stock vintage effects. Both the cost–of–adjustment approach as proposed by Lucas (1967) and the vintage model [the “time–to–build” approach of Kydland and Prescott (1982)] give rise to a system of dynamic factor demands that allow for a richer class of investment behavior than static models. The object of this chapter is to examine the competing vintage and cost–of–adjustment models using U.S. production data. A new statistical methodology for choosing among nonnested models of optimizing behavior is developed and applied to this critical investment problem.
Empirical work on the derived demand for capital and other factors of production has focused on one–period models of the firm. A wide variety of functional forms have been proposed for production and cost functions. The translog form [see Berndt and Wood (1975) for production applications], the miniflex Laurent form proposed by Barnett (1983), and the Fourier flexible form proposed by Gallant (1982) have all been applied within the context of static models of the firm.
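For concreteness, the translog form mentioned above can be written, for a cost function in output \(y\) and input prices \(p_i\), as the standard second-order logarithmic expansion (this is the textbook statement of the form, not a specification estimated in this chapter):
\[
\ln C(y, p) \;=\; \alpha_0 + \alpha_y \ln y + \sum_i \alpha_i \ln p_i
+ \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
+ \sum_i \gamma_{iy} \ln p_i \ln y
+ \tfrac{1}{2}\gamma_{yy}(\ln y)^2 ,
\]
with symmetry \(\gamma_{ij} = \gamma_{ji}\) and linear homogeneity in prices imposed through \(\sum_i \alpha_i = 1\) and \(\sum_i \gamma_{ij} = \sum_i \gamma_{iy} = 0\).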
For some time now, it has been known that homothetic functional separability, the consistency of sequential optimization, the existence of input price or quantity aggregates, and the equality of certain substitution elasticities involve similar, if not identical, restrictions on the underlying production or utility function. Because functional separability has such important implications, separability restrictions have been the focus of a variety of empirical studies.
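The restriction at issue can be stated compactly. A production function \(f\) is weakly separable in a group of inputs \(S\) from the remaining inputs when (the notation is generic and introduced here only for illustration)
\[
\frac{\partial}{\partial x_k}\!\left(\frac{f_i(x)}{f_j(x)}\right) = 0
\qquad \text{for all } i, j \in S \text{ and } k \notin S ,
\]
where \(f_i = \partial f/\partial x_i\); equivalently, \(f(x) = F\bigl(g(x_S),\, x_{\bar S}\bigr)\) for some aggregator function \(g\). It is restrictions of this kind whose empirical content is at stake in the studies referred to above.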
The empirical and theoretical literature on functional separability in production or cost functions to date has been almost exclusively presented in the context of cases in which all inputs or commodities are hypothesized to adjust instantaneously to their long–run or full equilibrium levels. By contrast, in recent work on factor demand models it has become increasingly common to assume instead that in the short run certain inputs (such as capital plant and equipment) are fixed but that in the long run these quasi–fixed inputs are variable.
At least since the work of Viner (1931), it has been known that the firm's long–run average total–cost (LRATC) curve can be constructed as the envelope of tangencies with short–run average total–cost (SRATC) curves. In this chapter attention is focused on the following issues: If functional separability holds on the LRATC function, what restrictions are implied on the SRATC function? Similarly, what does functional separability on the SRATC function imply for the corresponding LRATC function? In brief, the envelope consistency of functional separability restrictions is examined in the context of cost functions.
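In the notation of a cost function with output \(y\), input prices \(p\), and a quasi-fixed input \(k\) (symbols chosen here only for exposition), the envelope relationship referred to above is
\[
\mathrm{LRATC}(y, p) \;=\; \min_{k}\; \mathrm{SRATC}(y, p, k),
\qquad
\left.\frac{\partial\, \mathrm{SRATC}(y, p, k)}{\partial k}\right|_{k = k^{*}(y, p)} = 0 ,
\]
so that each point on the long-run curve coincides with, and is tangent to, the short-run curve evaluated at the cost-minimizing level of the quasi-fixed input. The question posed in the chapter is how separability restrictions imposed on one side of this relationship constrain the other.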
Available theoretical models of unemployment and job duration are based on a dynamic formulation of an individual worker's job search and job–matching problems. In general versions of the theory, the individual anticipates future arrival of an uncertain sequence of employment opportunities, whether currently employed or not. The worker controls the transition process between employment states and jobs by choosing search and acceptance strategies that maximize expected wealth given current information. In other words, the strategy is the solution to a well–defined dynamic programming problem.
Given a model of this type, the properties of the probability distributions of both the length of time spent looking for an acceptable job while unemployed and, once employed, the length of the specific job spell are endogenously determined by the optimal strategy and the structure of the decision problem. Hence, observations on the completed unemployment and job spell lengths experienced by a sample of workers provide information about the problem's structure. The purpose of this chapter is to develop methods, suggested by the theory, that an econometrician might find useful for estimating structural parameters from available observations on realized unemployment and job spell lengths, and to test their potential usefulness using Monte Carlo techniques.
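The structure can be illustrated with the simplest stationary search problem (a textbook special case, not the general model developed in the chapter). Suppose an unemployed worker receives benefit \(b\), draws one wage offer \(w\) per period, and discounts at rate \(\beta\), while an employed worker loses the job with probability \(\delta\) per period:
\[
V^{u} \;=\; b + \beta\, \mathbb{E}\bigl[\max\{V^{e}(w),\, V^{u}\}\bigr],
\qquad
V^{e}(w) \;=\; w + \beta\bigl[(1-\delta)\,V^{e}(w) + \delta\,V^{u}\bigr].
\]
The optimal strategy is a reservation wage \(w^{*}\) satisfying \(V^{e}(w^{*}) = V^{u}\); the per-period probability of leaving unemployment is \(\Pr(w \ge w^{*})\), and it is this acceptance probability, together with the separation rate \(\delta\), that generates the unemployment and job spell duration distributions just described.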
Although structural models of unemployment and job spell duration have been available for some time, there have been few attempts in the literature to estimate them. Instead, ad hoc specifications of the duration hazards, borrowed from the statistical literature on survival and reliability analysis, are estimated.
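A typical example of such an ad hoc specification is the Weibull hazard from the survival literature,
\[
h(t) \;=\; \alpha\gamma\, t^{\gamma - 1},
\qquad
S(t) \;=\; \exp(-\alpha t^{\gamma}), \qquad \alpha, \gamma > 0 ,
\]
which allows positive (\(\gamma > 1\)) or negative (\(\gamma < 1\)) duration dependence but provides no link between its parameters and the structure of the worker's decision problem.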
A major methodological revolution is going on in the physical sciences at the present time as a result of dramatic recent advances in nonlinear dynamics. During the past 10 years, the mathematics and physics literature on strange attractors, bifurcation theory, and deterministic chaos has acquired growing capabilities in many fields. Most of the major advances in these areas were produced through numerical iteration on nonlinear deterministic dynamical recursions and required the use of computers. Computing power is especially important in the continuous–time applications. Hence, the fact that most such advances occurred within the past 10 years is not surprising. Examples of the recent accomplishments of that literature include the production of deterministic explanations for brain wave behavior, memory retrieval, turbulence in fluids, insect population behavior, thermal convection dynamics, climatic behavior over centuries of data, chemically reacting systems, buckling beams, sunspot activity, nonlinear wave interactions in plasmas, solid–state physics, lasers, self–generation of the earth's magnetic field, magnetohydrodynamic flow, and many other such phenomena that previously had been viewed as inherently stochastic and beyond existing theoretical modeling capabilities. The applications have been both theoretical and empirical. Although the first definitive experimental observations of deterministic chaos in nature were produced by physicists only 3 years ago (in a number of papers published in 1983), it nevertheless already has become clear that low–dimensional chaos is common in nature.
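To make concrete what "numerical iteration on nonlinear deterministic dynamical recursions" involves, the logistic map is the standard minimal example; the sketch below is offered purely as an illustration of deterministic chaos and is not a model drawn from the literature surveyed above.

```python
import numpy as np

def logistic_orbit(r=3.9, x0=0.2, n=500, burn=100):
    """Iterate the deterministic recursion x_{t+1} = r * x_t * (1 - x_t).

    For r near 4 the orbit is chaotic: fully deterministic, yet irregular
    and sensitively dependent on the initial condition x0.
    """
    x = x0
    orbit = []
    for t in range(n + burn):
        x = r * x * (1 - x)
        if t >= burn:            # discard transient iterations
            orbit.append(x)
    return np.array(orbit)

a = logistic_orbit(x0=0.2)
b = logistic_orbit(x0=0.2000001)     # a tiny perturbation of the starting value
print(np.max(np.abs(a - b)))         # the two trajectories diverge completely
```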
Virtually without exception, inference in dynamic econometric models of aggregate economic time series data is based on the asymptotic sampling–theoretic distribution of estimators, often maximum–likelihood estimators. There is no population from which sets of observations of aggregate economic time series can be drawn repeatedly, as may be the case with longitudinal or panel data. Sampling–theoretic results for independent populations have been extended to inference from a single realization of a time series, but the length of the periods for which parametric models of economic time series may reasonably be regarded as stable usually limits attention to modest departures from the linear model. There is no way to carefully incorporate the inequality restrictions that often emerge from economic theory. Improvements on asymptotic theory, especially for highly nonlinear estimators, are arduous theoretical tasks whose occasional completion seems to have had little effect on the way applied work is carried out.
This situation and recent drastic reductions in computing costs suggest that alternatives to this standard approach to inference for economic time series be contemplated. This chapter explores a formal numerical Bayesian approach with diffuse priors, relying on cheap computing to overcome the analytical intractability of useful priors and of functions of the parameters of interest. The ability to select diffuse priors freely means that inequality restrictions can be imposed, and the ability to choose functions of interest for their bearing on empirical questions means that issues otherwise relegated to equivocal discussion can be treated formally.
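A minimal sketch of what such a numerical Bayesian computation can look like is given below for a linear regression with a diffuse prior. The simulated data, the simplifying normal approximation to the posterior, and the particular inequality restriction (a nonnegative slope) are all assumptions made for the example and do not reproduce the chapter's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = a + b*x + e, with a diffuse (flat) prior on (a, b).
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.3 * x + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), x])

# Under a diffuse prior (and treating the error variance as estimated),
# the posterior for the coefficients is approximately normal around the
# least-squares estimate -- a deliberate simplification for this sketch.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = np.sum((y - X @ beta_hat) ** 2) / (n - 2)
V = sigma2 * np.linalg.inv(X.T @ X)

# Monte Carlo: draw from the posterior, impose the inequality restriction
# b >= 0 by discarding violating draws, and report functions of interest.
draws = rng.multivariate_normal(beta_hat, V, size=50_000)
kept = draws[draws[:, 1] >= 0.0]
print(kept.mean(axis=0))           # posterior mean under the restriction
print(np.mean(draws[:, 1] >= 0))   # posterior probability of the restriction
```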
Applications of forms of control theory to economic policy making have been studied by Theil (1958), Chow (1975, 1981), and Prescott (1972). Many of the applications are approximations to the optimal policy – suggestions of how to improve existing practice using quantitative methods rather than development of fully optimal policies. Chow (1975) obtains the fully optimal feedback control policy for linear systems with known coefficients, a quadratic loss function, and a finite time horizon. Chow (1981) argues that the use of control techniques for the evaluation of economic policies is possible and essential under rational expectations. The use of optimal control for microeconomic planning is fully established. An early analysis with many practical suggestions is Theil (1958). Optimal control theory has also been useful in economic theory, in analyzing the growth of economies as well as the behavior over time of economic agents.
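For the known-coefficient case just described, the optimal feedback rule can be computed by a backward, Riccati-type recursion. The scalar sketch below illustrates only that mechanism; the coefficients, horizon, and loss weights are made up for the example and do not correspond to any model in the chapter.

```python
import numpy as np

# Finite-horizon LQ control of a scalar system x_{t+1} = a*x_t + b*u_t + e_t,
# minimizing E[sum_t q*x_t^2 + r*u_t^2].  All values are illustrative.
a, b, q, r, T = 0.9, 0.5, 1.0, 0.1, 20

# Backward Riccati recursion for the value-function weight p and feedback gain f.
p = q                                   # terminal condition
gains = []
for t in range(T):
    f = (a * b * p) / (r + b * b * p)               # feedback rule u_t = -f * x_t
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    gains.append(f)
gains = gains[::-1]                                 # order gains from t = 0 to T-1

# Simulate the controlled system from an initial state of 5.0.
rng = np.random.default_rng(0)
x = 5.0
for t in range(T):
    u = -gains[t] * x
    x = a * x + b * u + 0.1 * rng.normal()
print(x)   # the feedback rule steers the state toward zero
```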
The problem of control of a stochastic economic system with unknown parameters is far less well studied. Zellner (1971, Chapter 11) studied the two–period control problem for a normal regression process with a conjugate prior and quadratic loss function.
The contents of this volume comprise the proceedings of a conference held at the IC² Institute at the University of Texas at Austin on May 22–23, 1986. The conference was entitled “Dynamic Econometric Modeling” and was organized to bring together presentations of some of the fundamental new research that has begun to appear in the areas of dynamic structural modeling, time series modeling, nonparametric inference, and chaotic attractor inference. These areas of research have in common a movement away from the use of static linear structural models in econometrics.
The conference that produced this proceedings volume is the third in a new conference series, called International Symposia in Economic Theory and Econometrics. The proceedings series is under the general editorship of William A. Barnett. Individual volumes in the series will often have co–editors, and the series has a permanent Board of Advisory Editors. The symposia in the series are sponsored by the IC² Institute at the University of Texas at Austin and are cosponsored by the RGK Foundation.
This third conference also was cosponsored by the Federal Reserve Bank of Dallas and by the Department of Economics, Department of Finance, Graduate School of Business, and Center for Statistical Sciences at the University of Texas at Austin. The first conference in the series was co–organized by William A. Barnett and Ronald Gallant, who also co–edited the proceedings volume.
In the last few years a growing concern over the phenomenon of the shadow (or hidden) economy has arisen, and as a consequence this topic has received increased attention among public officials, politicians, and social scientists. For the United States, like many other industrial countries, there are several important reasons why politicians and the public in general should be concerned about the growth and size of the shadow economy. Among the most important of these are:
If an increase in the size of the shadow economy is mainly caused by a rise in the tax burden, an increased tax rate may lead to a decrease in tax receipts and thus further increase the budget deficit.
If economic policy measures are based on mistaken “officially measured” indicators (such as unemployment), these measures may at the very least be of the wrong magnitude. In such a situation a prospering shadow economy may cause a severe problem for political decision makers because it leads to quite unreliable officially measured indicators, so that even the direction of intended policy measures may be questionable.
The rise of the shadow economy can be seen as a reaction of individuals to their overburdening by state activities (such as high taxes and an increasing number of state regulations).
Time series analysts (often “econometricians”) working in institutions involved with economic policy making or short–term economic analysis face two important professional demands: forecasting and unobserved components estimation (including seasonal adjustment). Estimation of unobserved components is overwhelmingly done in practice by using ad hoc filters; the most popular example is estimation of the seasonally adjusted series with the X11 or X11 ARIMA program.
Concerning forecasting, the decade of the seventies witnessed the proliferation of autoregressive integrated moving–average (ARIMA) models (Box and Jenkins 1970), which seemed to capture well the evolution of many series. Since this evolution is related to the presence of trend, seasonal, and noise variation, the possibility of using ARIMA models in the context of unobserved components was soon recognized. Since the early work of Grether and Nerlove (1970) on stationary series, several approaches have been suggested. I shall concentrate on one that is becoming, in my opinion, a powerful statistical tool in applied time series work [starting references are Cleveland and Tiao (1976) and Box, Hillmer, and Tiao (1978); more recent references are Bell and Hillmer (1984) and Maravall and Pierce (1987)]. In the context of an application related to the control of the Spanish money supply, I shall address the issues of model specification, estimation of the components, diagnostic checking of the results, and the drawing of inferences.
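As a minimal illustration of the ARIMA modeling step referred to above, the sketch below fits an “airline”-type seasonal ARIMA model to a simulated monthly series using statsmodels. The simulated series, the chosen orders, and the forecast horizon are assumptions made for the example; the chapter's actual application to the Spanish money supply is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Simulate a monthly series with trend and seasonal variation (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(120)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=120)
series = pd.Series(y, index=pd.date_range("1970-01", periods=120, freq="MS"))

# "Airline"-type model: ARIMA(0,1,1) with a seasonal (0,1,1)_12 component.
model = ARIMA(series, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
result = model.fit()
print(result.params)               # estimated moving-average coefficients
print(result.forecast(steps=12))   # one year of out-of-sample forecasts
```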
The initial discussion between the decision analyst and the decision-maker is pivotal in determining whether the analyst will get a chance to “practice his wares.” During this discussion the analyst must ascertain the nature of the decision, the concerns of the decision-maker, and the problems that he is likely to encounter during the course of the analysis. By the end of the discussion the analyst must have described an analytic process that assures the decision-maker that his concerns will be addressed. During the course of the discussion the analyst must tailor his approach to the decision-maker and his organization, the nature of the decision, and any potential problems. This tailored process must convince the decision-maker that the process will be worth his while. In a nutshell the analyst must convince the decision-maker that the analytic process can be expected to produce new insights into his decision that will be worth the time and money being spent by the decision-maker.
The analyst must always remember that it is these insights (as to which alternative best addresses the decision-maker's uncertainties and values) that the analyst has to offer. And it is the analytic process (in which the decision analysis paradigm is imbedded) that will produce these insights.
In following through the discussion of decision theory that we presented in Chapter 3 the reader will have become aware that it is concerned with how an individual person might reasonably make decisions. Most significant decisions, however, are not made by an individual acting alone but by groups of people. Many examples spring to mind: committee decisions determined by a majority vote; the results of the adversarial process in a law court; or the subtle combination of political pressures that determine events in government. It is also the case that most practical applications of decision analysis are carried out in an organizational setting and focus on organizational decision-making rather than on improving the decision processes of a single decision-maker. It is most important, then, to ask how a normative theory developed to guide individuals can have a role to play in organizational decision-making. It clearly does have such a role in practice; we must ask if that role is valid, or simply bogus in that it does not lead to better decisions.
To address this question, we look first at what is known about how decisions are in fact made in organizations. The literature on this is diverse and enormous, and several different intellectual traditions have contributed to our understanding.
During the four years that have elapsed from the initial concept to the publication of this book, we have received helpful advice from many people. We want to record our particular thanks to a number of our colleagues who have read part or all of the manuscript, and given constructive criticism. These are, in alphabetical order, Val Belton, Terry Bresnick, Rex Brown, Sue Brownlow, Nick Butterworth, Elizabeth Collins, Baruch Fischhoff, Vivien Fleck, Anthony Freeling, Gary Frisvold, Elizabeth Garnsey, Colin Gill, Lisa Griesen, Kenneth Kuskey, Warner North, James Onken, and Jake Ulvila. In recording our thanks, however, we also want to state that any inaccuracies in our text are our responsibility alone, and, if readers find any, we would be grateful to learn of them. Many authors and publishers have kindly granted us permission to quote from their works, and we would also like to record our thanks to all these. In addition we wish to thank Richard Lund, who provided the photograph used to reproduce Figure 8.25.
Readers will find a guide to the book – what it contains, and how to use it – in Chapter 1. We do not, however, mention there the use of the exercises in Chapters 3, 4, and 7.