Neoclassical economics is based on the premise that models that characterize rational, optimizing behavior also characterize actual human behavior. The same model is used as a normative definition of rational choice and a descriptive predictor of observed choice. Many of the advances in economic theory in the past 50 years have constituted clarifications of the normative model. One of the most significant of these advances was the normative theory of choice under uncertainty, expected utility theory, formulated by John von Neumann and Oskar Morgenstern (1947). Expected utility theory defined rational choice in the context of uncertainty. Because of the dual role economic theories are expected to play, expected utility theory also provided the basis for a new style of research pioneered by Maurice Allais (1953) and Daniel Ellsberg (1961). Allais and Ellsberg exploited the precision of the theory to construct crisp counterexamples to its descriptive predictions. The methods they used to demonstrate the force of their counterexamples were similar. Some prominent economists and statisticians, among them Leonard Savage, were presented with problems to which most gave answers inconsistent with the theory. The fact that Savage was induced to violate one of his own axioms was taken to be sufficient proof that a genuine effect had been discovered.
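To see the force of such a counterexample, consider a standard textbook version of Allais's problem (the payoffs here are the usual illustrative figures). In the first choice, option A pays $1 million for certain, whereas option B pays $5 million with probability 0.10, $1 million with probability 0.89, and nothing with probability 0.01. In the second choice, option C pays $1 million with probability 0.11 and nothing otherwise, whereas option D pays $5 million with probability 0.10 and nothing otherwise. Most respondents choose A in the first problem and D in the second. Under expected utility theory, however, preferring A to B requires
\[
0.11\,u(\$1\text{M}) \;>\; 0.10\,u(\$5\text{M}) + 0.01\,u(\$0),
\]
while preferring D to C requires the reverse inequality, so no utility function can rationalize the modal pattern of responses.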
Several studies soliciting willingness-to-pay (WTP) and willingness-to-accept (WTA) responses for a variety of goods have found a large disparity between these “buying price” and “selling price” measures of value (see Knetsch and Sinden, 1984, for a summary of these studies). Although utility theory is consistent with some disparity between them, scholars generally have argued that the empirical disparity in these responses is much larger than is expected from the theory. Indeed, the mean WTA values obtained in this way are frequently several times greater than the mean WTP values so obtained. These empirical results are very robust under investigations designed to determine the effect of monetary incentives, experience, and other factors on the disparity. These results cast serious doubt on the validity of utility (or demand) theory as a calculating, cognitive model of individual decision behavior.
Another related series of experimental results has established what is commonly referred to as the preference reversal phenomenon (see the survey by Slovic and Lichtenstein, 1983). The term refers to the large proportion of subjects who report that they prefer item A to item B yet assign a smaller WTP or WTA value to A than to B (or who prefer B to A yet value B below A).
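A typical instance, with lotteries chosen purely for illustration in the spirit of the classic “P-bet/$-bet” pairs: bet A pays $4 with probability 35/36, and bet B pays $16 with probability 11/36, so their expected values are roughly $3.89 and $4.89, respectively. Many subjects state that they prefer A when asked to choose between the two, yet announce a higher selling price for B, an ordering of values that contradicts their stated choice.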
It has been more than 10 years since we published our first experimental test of economic choice theory using animal subjects (Kagel et al., 1975) and even longer since we began conducting economic choice experiments with animal subjects (1971). We continue to be engaged in experimental studies with animal subjects, extending our inquiries beyond static models of consumer choice and labor supply behavior under certainty to choices among risky alternatives (Battalio, Kagel, and MacDonald, 1985) and intertemporal choice behavior (Kagel, Green, and Caraco, 1986). Although no other economists we know of have undertaken experimental studies of animal choice behavior (i.e., operating their own laboratories), there is a growing dialogue between economists and psychologists concerned with investigating economic choice theories using animal subjects, as judged by the expanding number of research proposals and working papers involving such collaborative efforts. In addition, efforts by psychologists to design and analyze animal choice experiments with economic models (e.g., Lea, 1981; Hursh, 1984) have increased, as has the use of optimization theories in biology, borrowed more or less directly from economics and operations research, to analyze the ecological behavior of animals (Maynard-Smith, 1978). At the same time there has been a virtual explosion in economics of experimental studies of market behavior using human subjects (for reviews see Smith, 1982a; Plott, 1982) and a smaller number of studies investigating individual choice behavior with human subjects.
I first began to plan an experimental study of bargaining while I was preparing a monograph (Roth, 1979) concerned with what was then (and is probably still) the most comprehensively articulated body of formal theory about bargaining in the economics literature. I am referring to the game-theoretic work that followed in the tradition begun by John Nash (1950).
A number of experiments had already investigated bargaining situations of the kind addressed by this set of theories, and some were even explicitly concerned with testing the predictions of the theory that Nash had proposed. However, none of these experiments corresponded closely to the conditions assumed by Nash's theory or measured those attributes of the bargainers that the theory predicted would influence the outcome of bargaining. This was largely because, taken literally, Nash's theory applies to bargaining under conditions unlikely to obtain in natural bargaining situations and depends on attributes of the bargainers that are difficult to measure. Specifically, Nash's theory assumes that bargainers have available to them the information contained in one another's expected utility functions (i.e., each bargainer's preferences and risk posture), and it depends on this information to generate a prediction about the outcome of bargaining. Some of the earlier experimenters had elected to examine bargaining under conditions they believed more closely approximated natural situations, and all had assumed, for the purpose of obtaining predictions from Nash's theory, that the preferences of all bargainers were identical and risk neutral.
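To illustrate why the bargainers' risk postures matter for Nash's prediction, the following sketch (in Python, with purely illustrative utility functions that are not drawn from any experiment discussed here) computes the Nash bargaining solution for dividing a unit pie between a risk-neutral and a risk-averse bargainer; the solution maximizes the product of the bargainers' utility gains over the disagreement point.

import numpy as np

# Nash bargaining over a unit pie: bargainer 1 receives x, bargainer 2 receives 1 - x.
# The disagreement point gives both bargainers utility zero.
# Illustrative utilities: bargainer 1 is risk neutral, bargainer 2 is risk averse.
u1 = lambda x: x
u2 = lambda y: np.sqrt(y)

shares = np.linspace(0.0, 1.0, 100_001)          # candidate shares for bargainer 1
nash_product = u1(shares) * u2(1.0 - shares)     # product of utility gains
x_star = shares[np.argmax(nash_product)]

print(f"Predicted share for the risk-neutral bargainer: {x_star:.3f}")  # about 0.667
# Under the identical, risk-neutral utilities assumed in the earlier experiments,
# the predicted division is instead an equal split.

The point of the sketch is simply that the prediction changes with the bargainers' risk postures, which is why a test of the theory requires measuring, or controlling, those attributes.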
Nonlinear least squares is the prototypical problem for establishing the consistency of nonlinear econometric estimators in the sense that the analysis abstracts easily and the abstraction covers the standard methods of estimation in econometrics: instrumental variables, two- and three-stage least squares, full information maximum likelihood, seemingly unrelated regression, M-estimators, scale-invariant M-estimators, generalized method of moments, and so on (Burguete, Gallant, and Souza 1982; Gallant and White 1986). In this chapter, nonlinear least squares is adapted to a function space setting where the estimator is regarded as a point in a function space rather than a point in a finite-dimensional Euclidean space. Questions of identification and consistency are analyzed in this setting. Least squares retains its prototypical status: The analysis transfers directly both to the methods of inference listed above, recast in a function space setting, and to semi-nonparametric estimation methods. Two semi-nonparametric examples, the Fourier consumer demand system (Gallant 1981) and semi-nonparametric maximum likelihood applied to nonlinear regression with sample selection (Gallant and Nychka 1987), are used to illustrate the ideas.
Introduction
The intent of a semi-nonparametric methodology is to endow parametric inference with the nonparametric property of asymptotic validity against any true state of nature. The idea is to set forth a sequence of finite-dimensional parametric models that can approximate any true state of nature in the limit with respect to an appropriately chosen norm. As sample size increases, one progresses along this sequence of models. At any given sample size, then, the method is parametric.
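A minimal sketch of this idea, assuming nothing about the chapter's actual implementation, is to regress on a truncated Fourier series whose number of terms grows slowly with the sample size; the function names, the truncation rule, and the simulated data below are purely illustrative.

import numpy as np

def fourier_design(x, K):
    # Trigonometric basis of order K: constant, cos(j*x), sin(j*x), j = 1, ..., K.
    cols = [np.ones_like(x)]
    for j in range(1, K + 1):
        cols.append(np.cos(j * x))
        cols.append(np.sin(j * x))
    return np.column_stack(cols)

def seminonparametric_fit(x, y):
    # Let the number of basis terms grow with n (illustrative rule K ~ n^(1/3)),
    # so the finite-dimensional parametric model expands toward the true function.
    K = max(1, int(len(x) ** (1.0 / 3.0)))
    theta, *_ = np.linalg.lstsq(fourier_design(x, K), y, rcond=None)
    return lambda x_new: fourier_design(x_new, K) @ theta

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=500)
y = np.exp(np.sin(x)) + rng.normal(scale=0.2, size=x.size)   # unknown "true state of nature"
g_hat = seminonparametric_fit(x, y)
print(g_hat(np.array([1.0, 3.0])))   # least-squares estimate of the regression function

At each sample size the fit is an ordinary parametric least-squares problem; the nonparametric character comes entirely from letting the dimension of the model increase with the sample.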
Economics is not engineering; yet, perhaps, we can track the economy using the same tools used to track a spacecraft, an oil tanker, or a chemical reaction. In the 25 years since the publication of the original Kalman (1960) and Kalman and Bucy (1961) papers that introduced digital filters for nonstationary problems, economists have been studying these possibilities, and the presence of this session at the August 1985 World Congress of the Econometric Society suggests that it is still a question of great interest.
The initial attempts to apply these methods to economic problems immediately faced a major difficulty. Engineers usually had quantitative theories that described the equations of motion of physical systems and were primarily interested in estimates of the “state” of the system obtained from noisy measurements. The extraction of estimates of such signals from noise was called the estimation, or “state estimation,” problem. Economists, however, knew far less about the fundamental laws of motion of economic systems and were therefore particularly interested in discovering such laws of motion from the noisy data rather than in merely estimating the state of the economy. Since the Kalman filter takes the parameters of the process as given in estimating the state, it appeared that there would be little scope for applying such methods in economics.
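A minimal sketch of the filtering recursion for the simplest scalar case makes the point concrete; the local level model and all parameter values below are illustrative, and the disturbance variances are treated as known, which is precisely the limitation just described.

import numpy as np

def kalman_filter_local_level(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e6):
    # Local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
    # Returns filtered estimates of the unobserved level mu_t.
    a, p = a0, p0                     # state estimate and its variance
    filtered = []
    for yt in y:
        # Prediction step: the level follows a random walk.
        p = p + sigma2_eta
        # Update step: combine the prediction with the noisy measurement.
        k = p / (p + sigma2_eps)      # Kalman gain
        a = a + k * (yt - a)
        p = (1.0 - k) * p
        filtered.append(a)
    return np.array(filtered)

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(scale=0.1, size=200))    # unobserved state
y = level + rng.normal(scale=0.5, size=200)           # noisy measurements
print(kalman_filter_local_level(y, sigma2_eps=0.25, sigma2_eta=0.01)[-5:])

The recursion delivers an estimate of the unobserved state, but it takes sigma2_eps and sigma2_eta as given; when the laws of motion and their parameters are themselves unknown, as in economics, something more than the filter itself is required.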
Abstract This chapter reviews the asymptotic properties of the Nadaraya-Watson kernel estimator of an unknown (multivariate) regression function. Conditions are set forth for pointwise asymptotic normality and uniform weak consistency. These conditions cover the standard i.i.d. case with continuously distributed regressors, as well as cases in which the distribution of all, or some, of the regressors is discrete and/or the data are generated by a class of strictly stationary time series processes. Moreover, attention is paid to the problem of how the kernel and the window width should be specified. Furthermore, the estimation procedure under review is illustrated by a numerical example.
Introduction
A large part of applied econometric research involves the specification and estimation of regression models, in most cases the linear regression model. The most crucial assumption underlying these models is that they represent the mathematical expectation of the dependent variable conditional on the regressors, which implies that the expectation of the error term conditional on the regressors equals zero with probability 1. If the dependent variable has a finite absolute first moment, this conditional expectation always exists; compare Chung (1974, Theorem 9.1.1). Regression models are therefore either true or false according to whether or not they do in fact represent the conditional expectation of the dependent variable given the regressors.
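As a point of reference for the estimator under review, the following is a minimal sketch of the Nadaraya-Watson estimator of the conditional expectation E[y | x], using a Gaussian kernel and a fixed window width; the kernel, the window width, and the simulated data are purely illustrative, since how these should be specified is exactly what the chapter goes on to discuss.

import numpy as np

def nadaraya_watson(x_data, y_data, x_eval, h):
    # Kernel regression estimate of E[y | x] at the points x_eval,
    # using a Gaussian kernel with window width (bandwidth) h.
    u = (x_eval[:, None] - x_data[None, :]) / h
    weights = np.exp(-0.5 * u ** 2)
    return (weights @ y_data) / weights.sum(axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(-2.0, 2.0, size=400)
y = np.sin(np.pi * x) + rng.normal(scale=0.3, size=x.size)
grid = np.linspace(-2.0, 2.0, 9)
print(nadaraya_watson(x, y, grid, h=0.2))   # pointwise estimates of the regression function

The estimator is simply a locally weighted average of the observed y values, with weights determined by the kernel and the window width; no functional form for the regression is imposed.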
In recent years macroeconomic explanations of cyclical movements in hours of work and consumption have made extensive use of the microeconomic model of life-cycle allocation. Starting with Lucas and Rapping's (1969) original work, the standard approach in macroanalysis makes use of aggregate data to estimate structural parameters of an individual's intertemporal allocation model. The results are then interpreted as those describing the “representative consumer.” At the same time that this approach draws inferences about individual behavior from aggregate data, other work asks whether microestimates of the life-cycle model can explain macroeconomic phenomena. For example, Hall (1980a) concludes that the microevidence on the intertemporal responsiveness of labor supply appears consistent with aggregate data, whereas Ashenfelter (1984) maintains it is not. In all of this research, a microeconomic model of life-cycle behavior is being interpreted in a macroeconomic setting, so it is worth determining the conditions under which the interpretation is valid.
The purpose of this chapter is to explore the relationship between micro and aggregate specifications of the intertemporal allocation of labor supply. The empirical relevance of the life-cycle model is not addressed in this study; instead it is a maintained hypothesis that this model is an appropriate description of an individual's labor supply and consumption decisions. The objective is to construct empirical specifications for aggregate hours of work that have the life-cycle allocation model as their microeconomic foundation. It should be noted, however, that the general aggregation issues addressed here are relevant for other characterizations of an individual's decision making.
From the point of view of econometric modelling, the Kalman filter is of very little interest. It is simply a statistical algorithm that enables certain computations to be carried out for a model cast in state space form. The crucial point for the econometrician to understand is that the state space form opens up the possibility of formulating models that are much wider and richer than those normally considered. Furthermore, it often allows the setting up of models that have a more natural interpretation and provide more useful information on the nature of underlying economic processes. This second point can be illustrated clearly at the simplest level of a pure time series model. Indeed, the aim of this chapter will be to show how the state space form can be used to provide a framework for modelling economic time series that is in many ways preferable to the more conventional approach based on ARIMA processes. The proposed framework links up closely with that of dynamic econometric models, and the resulting model selection methodology is much more akin to that of econometrics. Perhaps the clearest indication of the closeness of these links is that the starting point for the proposed framework is regression rather than the theory of stationary stochastic processes.
The state space form allows unobserved components to be incorporated into a model, and the Kalman filter provides the means of estimating them. The specification of these components must, to some extent, depend on a priori considerations, and since the components presumably have an economic interpretation, the model is a structural one; see Engle (1978). In the reduced form the information on individual unobserved components is not explicitly available since the disturbances that generate the various unobserved components are amalgamated into a single disturbance term. In the case of a linear univariate structural time series model, the reduced form is an ARIMA process.
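The simplest univariate case makes this reduced-form statement concrete. In the local level model
\[
y_t = \mu_t + \varepsilon_t, \qquad \mu_t = \mu_{t-1} + \eta_t,
\]
with mutually uncorrelated white-noise disturbances \(\varepsilon_t\) and \(\eta_t\), first differencing gives
\[
\Delta y_t = \eta_t + \varepsilon_t - \varepsilon_{t-1},
\]
whose autocorrelations vanish beyond lag one; the reduced form is therefore an ARIMA(0,1,1) process, in which the two structural disturbances have been amalgamated into a single moving-average disturbance.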
In this chapter we present a unified theory of specification testing that applies to a broad range of the data, model, and estimator configurations likely to be met in econometric practice. The abstract results are applied to obtain specification tests based on maximum-likelihood estimators for the parameters of dynamic models. We propose a dynamic information matrix test that should be useful for detecting dynamic misspecification in a wide variety of models and discuss its interpretation in a number of simple special cases. We also propose some new, computationally convenient versions of the Hausman test.
Introduction
Over the past several years there has been a substantial amount of attention directed to the consequences and detection of specification problems in econometric modeling and estimation. Although the literature is now quite extensive [see, e.g., Ruud (1984)], recent work of Newey (1985) and Tauchen (1984) has provided a single unifying framework in which most available results on specification testing can be embedded as special cases. This unification is available, strictly speaking, only for the convenient and insightful context of independent and identically distributed (i.i.d.) observations. Economic data are rarely obtained in such a way that the i.i.d. assumption is realistic. Instead, the data may be generated by quite arbitrary stochastic processes.
The purpose of this chapter is to present an extension of this unified framework for specification testing that allows for more realistic economic-data-generating processes.
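The information matrix test mentioned above rests on a familiar implication of correct specification: the expected Hessian of the log-likelihood and the expected outer product of the scores sum to zero,
\[
E\!\left[\frac{\partial^{2}\log f(y_t \mid x_t,\theta_0)}{\partial\theta\,\partial\theta'}
 + \frac{\partial\log f(y_t \mid x_t,\theta_0)}{\partial\theta}\,
   \frac{\partial\log f(y_t \mid x_t,\theta_0)}{\partial\theta'}\right] = 0 ,
\]
so a test can be based on the sample analogue of selected elements of this matrix evaluated at the maximum-likelihood estimate. In the dynamic setting considered here, the asymptotic distribution of such an indicator must in addition account for dependence across observations.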
To say that markets can be represented by supply and demand “curves” is no less a metaphor than to say that the west wind is the “breath of autumn's being.” (McCloskey 1983, p. 502)
When we tack a “random variable” onto a theoretical model, do we announce our faith in a supreme being, who, for reasons unknowable, endows us with deductive faculties sufficient to formulate a set of alternative hypotheses, one of which is the data-generating process that he or she has constructed to determine our fates? Is data analysis the holy sacrament through which the supreme being incrementally reveals the data-generating process to the faithful? Do we wait patiently until time infinity for the complete revelation, in the meantime forsaking all but consistent estimators?
No, I think not. Models, stochastic or otherwise, are merely metaphors. We are willing for some purposes to proceed as if the data were generated by the hypothesized model just as we are willing for other purposes to proceed as if “econometrics is a piece of cake.”
The basic conceptual error that is made by econometric theorists is their failure to recognize that in practice probabilities are metaphors. A probability metaphor is most compelling when the data come from a designed experiment with explicit randomization of the treatments. But in nonexperimental settings the probability metaphor often stretches the imagination beyond the point of comfort. Nagging but persistent doubts about the aptness of the metaphor leave us with nagging but persistent doubts about the inferences that depend on it.
Like other innovations, major ideas in economics enter the technology of economic analysis according to fairly well-defined patterns that Vernon has called “product cycles.” Consider Keynesian macromodels. The seminal study was Tinbergen's (1939) monograph for the League of Nations, but work on this topic began in earnest just after the last war, when Klein's tiny Model I was completed, and Tinbergen set up the rather incomplete model that was the first of a distinguished series built in the Central Planning Bureau of the Netherlands. It took about 30 years for what trade theorists would call this “Keynesian macromodels product cycle” to run its course. Today, though a number of major existing models retain their prestige, building new ones is an activity that is slightly looked down on in the academic world.
That is a sign of the success of this research. These models have established their worth in the practical world, where they are used daily, both in government and in big business. In the academic community, there continues to be considerable interest in careful investigations of such mechanisms as wage rigidity and the slowdown of productivity growth, a better understanding of which is necessary to make Keynesian macromodels more accurate forecasting tools, but which are also so important in their own right that they must be tackled and solved irrespective of their usefulness for modeling.
Economists are at their most comfortable when analysing markets in which agents respond only to price signals. They can then call upon a mass of generally accepted theory and proceed with their analysis with the minimum of fuss. Since the labour market is one of the most important in the economy, it would make life much more straightforward if it could be convincingly demonstrated that in this market the interaction of demand and supply was essentially via the price mechanism. One part of such a demonstration would clearly have to involve the presentation of convincing evidence that labour supply fluctuations both in the short and the long run were generated, for the most part, by fluctuations in real wages. Obviously, we cannot allow the short run here to be too short; otherwise, we shall founder on the simple fact, gleaned from personal experience, that many employees work much harder in some weeks than in others without any noticeable change in their remuneration. Neither can we allow the short run to be too long; otherwise, we find ourselves unable to call upon the straightforward theory when confronted with major aggregate fluctuations. A year seems a reasonable length of time for the short run since most of us would be happy if annual fluctuations in labour supply were mainly generated by real wage shifts even if seasonal fluctuations, for example, were brought about by other means.
Abstract: This chapter is concerned with methodological aspects of specification tests, with particular emphasis on general methods for deriving the asymptotic properties of specification test statistics in the presence of nuisance parameters in the context of nonlinear models. The properties of the classical procedures are briefly recalled. A general derivation of Hausman’s specification test is given. The problem of simultaneous misspecification tests is discussed. Finally, the properties of some test statistics are examined when the model is misspecified.
Introduction
The purpose of this chapter is to present certain aspects of the theory of specification tests. At the outset, we would like to emphasize that we do not intend to cover the subject completely and the choice of material reflects to some extent our own limitations and to a great extent our own interests. In particular, we shall only consider theoretical aspects of the subject, and this may be viewed as a shortcoming of this survey.
A good part of the material contained here is already covered in other surveys [see, e.g., Engle (1984), Ruud (1984), and the comments on this chapter by Breusch and Mizon (1984), Hausman (1984), Lee (1984), and White (1984)]. In our presentation of the subject we would like to emphasize the following points:
Most of the problems of specification tests arise in the context of nonlinear models. Therefore, it is extremely useful to develop general testing procedures that are valid for large classes of nonlinear models.
The test criteria and their statistical properties depend directly on the estimation procedure from which they are derived. In this respect, there is no possible separation between estimation theory and hypothesis-testing theory for nonlinear econometric models.
In almost all testing problems considered in econometrics, null hypotheses are composite; that is, they do not specify the model completely. The most common testing problems therefore involve nuisance parameters, and the treatment of these parameters should be incorporated explicitly when deriving test statistics, so that the properties of the tests can be assessed once the nuisance parameters have been eliminated in an appropriate way.
When deriving a new test statistic, one should try to compare it with existing ones for the same problem. There are at least two ways of comparing test statistics:
(a) One can often show that the test criteria may be derived from the same set of estimating equations for a given estimation framework.
(b) One can often compare the statistical properties of test statistics to be used for the same testing problems. In the context of nonlinear econometric models, two methods of comparison seem to be particularly useful, namely the computation of asymptotic local power and of Bahadur’s approximate slopes of the test statistics.
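As a concrete instance of comparing two estimators of the same parameter in the spirit of Hausman's test, the following sketch (simulated data; every number and variable name is illustrative rather than taken from the chapter) contrasts ordinary least squares with an instrumental-variables estimator of a single slope coefficient. Under the null hypothesis of correct specification both are consistent and OLS is efficient, so the quadratic form in their contrast is asymptotically chi-squared with one degree of freedom; the data below are generated with an endogenous regressor, so the statistic should be large.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=n)                    # instrument
v = rng.normal(size=n)                    # source of endogeneity
x = 0.8 * z + v + rng.normal(size=n)      # regressor correlated with the error
u = 0.5 * v + rng.normal(size=n)          # structural error
y = 1.0 * x + u                           # true slope is 1.0

# Two estimators of the slope: OLS (efficient under the null of exogeneity,
# inconsistent otherwise) and IV using z (consistent in either case).
b_ols = (x @ y) / (x @ x)
xhat = z * (z @ x) / (z @ z)              # projection of x on the instrument
b_iv = (xhat @ y) / (xhat @ x)

# Hausman statistic: quadratic form in the contrast, using a common
# error-variance estimate so that the variance difference is nonnegative.
s2 = np.mean((y - b_iv * x) ** 2)
var_ols = s2 / (x @ x)
var_iv = s2 / (xhat @ x)
H = (b_iv - b_ols) ** 2 / (var_iv - var_ols)
print(f"H = {H:.2f}, p-value = {stats.chi2.sf(H, df=1):.4f}")

The two estimators are derived from different estimating equations, and the contrast between them is informative precisely because they respond differently to the misspecification under test.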
On any given policy issue, one is likely to be able to find economists offering professional opinions on all sides, many of them with quantitative models to support their opinions. Though our discipline is in places as quantitative and mathematically deep as many of the physical sciences, we do not ordinarily resolve the important policy issues even with the most difficult and intriguing of our mathematical tools. Yet economists often speak as if their models and conclusions were imprecise only in the same sense that a structural engineer's finite-element model of a beam is imprecise - the model is a finite-dimensional approximation to an infinite-dimensional ideal model, and the ideal model itself ignores certain random imperfections in the beam. The public and noneconomist users of economic advice understand that the uncertainty about an economic model is not so straightforward and therefore rightly take the professional opinions of economists who pretend otherwise with many grains of salt.
The problem is not simply that our best models are too sophisticated for the layman to understand. David Freedman (1985), a prominent statistician, has recently examined in a series of papers some actual applications of the statistical method in economics and emerged with broad and scathing criticisms. Although there are effective counterarguments to some of Freedman’s criticisms, they cannot be made within the classical statistical framework of most econometrics textbooks or within the profession’s conventional rhetorical style of presenting controversial opinions in the guise of assumptions supposedly drawn from “theory.” Quantitatively oriented scientists outside the social sciences who make a serious effort to understand economic research will often have Freedman's reaction.
The purpose of this chapter is to deduce asset prices in three intertemporal models. The models we consider have the following attributes:
(i) preferences of consumers aggregate in the sense of Gorman (1953);
(ii) there is a complete set of competitive markets for date- and state-contingent commodities.
These two features simplify calculation of the competitive equilibrium because aggregate equilibrium quantities can be computed by solving a single-agent (representative consumer) resource allocation problem. The second feature introduces a rich variety of securities whose equilibrium prices are to be derived. Both of these attributes are quite restrictive. Models with these attributes, however, are interesting benchmarks for other models in which fewer restrictions are imposed on preferences or in which markets are incomplete either because of limitations in communication or commitment or because of the existence of private information.
This chapter focuses on calculating asset prices for two reasons. First, it is of pedagogical value to examine environments in which asset prices have explicit representations in terms of the underlying state variables of the economy. For example, such representations help characterize the implications for asset pricing of intertemporal specifications of preferences. Second, the ability to calculate equilibrium prices facilitates the analysis of asset-pricing models using econometric methods.
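The sort of explicit representation referred to here is, in its simplest form, the representative consumer's Euler equation: with time-separable preferences, subjective discount factor \(\beta\), and period marginal utility \(u'(\cdot)\), a security with random payoff \(x_{t+1}\) commands the equilibrium price
\[
p_t \;=\; E_t\!\left[\beta\,\frac{u'(c_{t+1})}{u'(c_t)}\,x_{t+1}\right],
\]
so that asset prices are pinned down once the equilibrium consumption process of the representative consumer, obtained from the single-agent resource allocation problem, is known.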