In the previous chapter we considered tests based on the likelihood function. Such tests can be used when the model is parametric, i.e., when the family of probability distributions defining the statistical model is indexed by a finite-dimensional parameter. Moreover, we assumed that this family contains the true probability distribution that generated the observations. When these assumptions are deemed too stringent, other testing procedures must be applied.
In Section 1 we shall consider the case where the unconstrained and constrained estimators of the parameter vector θ of interest are obtained by maximizing a general statistical criterion Ln(θ). In addition, we shall consider the case where the null hypothesis is in the so-called mixed form (see Section 10.1). The resulting framework is quite general. It contains as a special case the theory developed in the previous chapter, which relied upon the likelihood function and a null hypothesis in either implicit or explicit form. In Section 2 we shall study specification tests based on asymptotic least squares. In Section 3 we shall consider specification tests based on the generalized method of moments. These estimation methods were introduced in Chapter 9. In Section 4 we shall apply the general principle underlying Hausman tests to the present framework. This principle was introduced in Section 17.9 within a likelihood context. Lastly, in Section 5 we shall present the so-called information matrix test.
Tests Based on an Estimation Procedure Defined by a General Objective Function
We consider the framework of Section 10.3.1. The parameter of interest is θ ∈ Θ ⊂ R^p.
As discussed in the previous chapter, on the one hand, problems of estimation and testing are defined with respect to a function of the parameters. On the other hand, methods that answer these problems are based on functions of the observations. In the present chapter, we shall focus on the “information” on a given function of the parameters that is contained in a function of the observations, i.e., in a statistic. We shall study successively the following four issues:
a) Characterize the statistics that contain all the available information regarding the parameter function of interest. These are called sufficient statistics.
b) Characterize the statistics that contain no information regarding the parameter function of interest. These are called ancillary statistics.
c) Measure the loss of information associated with a given statistic that is neither sufficient nor ancillary.
d) Characterize the functions or the values of the parameters for which it is impossible to be completely informed. This is the so-called identification problem.
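To anticipate issue (a), sufficiency is usually characterized by the factorization criterion (sketched here in our own notation, with ℓ(y; θ) denoting the likelihood of the observations y): a statistic S(Y) is sufficient for θ if and only if the likelihood factors as

```latex
\ell(y;\theta) = g\bigl(S(y);\theta\bigr)\, h(y),
```

where the factor h(y) does not depend on θ, so that all the information about θ passes through the value S(y).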
In this chapter, we shall study these issues within a classical framework, i.e., within a decision theoretical framework without a prior distribution on the parameters. Similar results are established in Chapter 4 within the Bayesian and empirical Bayesian frameworks.
Sufficiency
Definition
Suppose that an entrepreneur receives a large shipment of industrial parts. The shipment contains an unknown proportion θ of defective parts. Because systematic quality control is too expensive, the entrepreneur checks only a sample of n parts. It is assumed that each part is drawn at random with equal probability and with replacement.
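As a small illustration of this sampling scheme (the sample size, defect rate, and function names below are ours, chosen purely for illustration), the likelihood of such a sample depends on the observations only through the number of defective parts, which anticipates the sufficiency of that count:

```python
import math
import random

def bernoulli_likelihood(sample, theta):
    """Likelihood of an i.i.d. Bernoulli sample: theta^s * (1 - theta)^(n - s),
    where s is the number of defective parts (1s) among the n drawn."""
    s = sum(sample)
    n = len(sample)
    return theta ** s * (1 - theta) ** (n - s)

random.seed(0)
theta_true = 0.1  # illustrative defect rate, not from the text
n = 20
sample = [1 if random.random() < theta_true else 0 for _ in range(n)]

# Two samples with the same count of defectives have identical likelihoods,
# whatever the value of theta: the likelihood "sees" only the count.
other = sorted(sample)  # same 0/1 values, different order
for theta in (0.05, 0.1, 0.3):
    assert math.isclose(bernoulli_likelihood(sample, theta),
                        bernoulli_likelihood(other, theta))
```

The count of defectives thus summarizes everything the sample says about θ, which is the intuition behind sufficiency.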
The present textbook was motivated by two principles drawn from our teaching experience. First, the application of statistical methods to economic modelling has led to the development of some concepts and tools that are frequently ignored in traditional statistical textbooks. Second, it is possible to introduce these various tools, including the most sophisticated ones, without recourse to overly advanced mathematical or probabilistic concepts. Thus the main goal of this book is to introduce these statistical methods by taking into account the specificity of economic modelling and by avoiding an overly abstract presentation.
Our first principle has various consequences. Though we discuss problems, concepts, and methods of mathematical statistics in detail, we rely on examples that essentially come from economic situations. Moreover, we analyse the characteristics and contributions of economic methods from a statistical point of view. This leads to some developments in model building but also in concepts and methods. With respect to model building, we address issues such as modelling disequilibrium, agents’ expectations, dynamic phenomena, etc. On conceptual grounds, we consider issues such as identification, coherency, exogeneity, causality, simultaneity, latent models, structural forms, reduced forms, residuals, generalized residuals, mixed hypotheses, etc. It is, however, in the field of methods that the economic specificity has the most important consequences. For instance, it leads us to focus on prediction problems, maximum likelihood or pseudo conditional maximum likelihood estimation methods, M-estimators, moment-type estimators, Bayesian and recursive methods, specification tests, nonnested tests, model selection criteria, etc.
Our second principle concerns our presentation. In general, we have tried to avoid proofs that are too technical.
In the preceding chapters the main asymptotic properties of estimators and test statistics have often been derived heuristically. In this chapter we present a more rigorous proof of the consistency of these estimators as well as a more detailed derivation of their asymptotic distributions. As a prerequisite we provide some useful tools that underlie the Taylor series expansions that were used to establish, for instance, the asymptotic equivalences among the classical testing procedures.
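The kind of expansion involved can be recalled as follows (written here in generic notation that is ours, with Ln denoting the criterion, θ̂n its maximizer, and θ0 the true value): the first-order condition ∂Ln(θ̂n)/∂θ = 0 is expanded around θ0,

```latex
0 = \frac{\partial L_n}{\partial \theta}(\hat{\theta}_n)
\approx \frac{\partial L_n}{\partial \theta}(\theta_0)
+ \frac{\partial^2 L_n}{\partial \theta\, \partial \theta'}(\theta_0)\,
(\hat{\theta}_n - \theta_0),
```

which yields the approximation of θ̂n − θ0 by minus the inverse Hessian times the gradient at θ0, the step from which asymptotic distributions and the equivalences among the classical test statistics are derived.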
There are several methods of proof for establishing the consistency of an “extremum” estimator, i.e., an estimator obtained by maximizing or minimizing a statistical criterion. We have chosen to present those methods that appear to apply to the largest number of situations and for which the regularity conditions are the easiest to verify. Note that the class of extremum estimators contains the M-estimators (maximum likelihood estimators, pseudo maximum likelihood estimators, etc.) as well as moment-type estimators (asymptotic least squares estimators, generalized method of moments estimators, etc.).
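As a minimal sketch of an extremum estimator (the quadratic criterion and the ternary-search routine are illustrative choices of ours, not the book's notation), one maximizes a statistical criterion Ln(θ) over the parameter space; taking minus the average squared deviation as criterion, the maximizer is the sample mean:

```python
def criterion(theta, data):
    """Statistical criterion L_n(theta): minus the average squared deviation,
    so that maximizing it gives the least-squares location fit."""
    return -sum((x - theta) ** 2 for x in data) / len(data)

def extremum_estimator(data, lo, hi, tol=1e-9):
    """Maximize the criterion over [lo, hi] by ternary search
    (valid here because this criterion is concave in theta)."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if criterion(m1, data) < criterion(m2, data):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

data = [1.2, 0.7, 2.9, 1.5, 0.2]
theta_hat = extremum_estimator(data, min(data), max(data))
# For this quadratic criterion the extremum estimator is the sample mean.
assert abs(theta_hat - sum(data) / len(data)) < 1e-6
```

Replacing the quadratic criterion with the log-likelihood, a pseudo log-likelihood, or a moment-matching distance yields the other members of the extremum class mentioned above.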
Existence of an Extremum Estimator
Before developing such an asymptotic theory it is necessary to reconsider the definition of an estimator. In Chapter 2 an estimator was defined as a mapping from the set of observations (sample space) to the set of possible values for the parameter (parameter space). In fact, since we are interested in the distribution of an estimator, in its expectation, variance, etc., this probability distribution must be well defined; for this reason, the mapping should be measurable. In particular, the sample space and the parameter space must be endowed with σ-algebras.