Often, a complete specification of a model for a particular conditional density is unavailable, either because the theory describing the relationship between the variables of interest lacks sufficient power to describe the density of interest, or because interest only attaches to certain aspects of the conditional density. Usually, both these reasons underlie the way in which a particular specification is formulated. In this chapter we explore the consequences for the QMLE of ignoring or misspecifying features of the true conditional density that are not of direct interest.
Conditional Expectations and the Linear Exponential Family
Typically, interest in economics attaches to a fairly limited range of attributes of the conditional density, such as the conditional mean or the conditional variance. Because the conditional variance can be represented as the difference between the conditional mean of the square of the random variable of interest and the square of the conditional mean of that variable, we focus in this section on the properties of the QMLE as an estimator of the parameters of the conditional mean of Yt given Wt. Specifically, we seek to answer the question, “Under what conditions will the QMLE provide a consistent estimate of the true parameters of a correctly specified model of the conditional mean despite misspecification of other aspects of the conditional distribution?” We also consider the issue of interpreting the QMLE when the model of the conditional mean is misspecified.
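The identity invoked above, Var(Y|W) = E[Y²|W] − (E[Y|W])², can be checked numerically. The sketch below uses a made-up data-generating process (not from the text) and approximates conditioning on W by restricting attention to a narrow window of W values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP (an assumption, not from the text):
# E[Y|W] = 1 + 2W and Var(Y|W) = 1 + W^2 (heteroskedastic).
n = 200_000
W = rng.normal(size=n)
Y = 1 + 2 * W + np.sqrt(1 + W**2) * rng.normal(size=n)

# Check Var(Y|W) = E[Y^2|W] - (E[Y|W])^2 on a thin slice W ≈ 1.
mask = np.abs(W - 1.0) < 0.05
lhs = Y[mask].var()                              # sample Var(Y | W ≈ 1)
rhs = (Y[mask] ** 2).mean() - Y[mask].mean() ** 2
print(lhs, rhs)   # both close to 1 + 1^2 = 2
```

The two quantities agree to floating-point precision because the identity is algebraic; the common value approximates 1 + W² evaluated at W = 1.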
In earlier chapters we have seen that under general conditions the QMLE tends stochastically to θ*, a parameter vector that minimizes the average Kullback-Leibler distance of the specification ft from ht, the conditional density of the dependent variables Yt given the explanatory variables Wt. In Chapter 6, we saw that under general conditions the QMLE is asymptotically normally distributed, centered at θn* and with a particular covariance matrix.
In many cases, it is possible to construct a variety of such well-behaved estimators for some sequence θ* = {θn*}. (Such estimators may or may not be QMLEs as defined here.) This is essentially always true in situations for which it is possible to construct a model that is correctly specified at least to some extent. Given this possibility, it is important to have some appropriate means of comparing the relative performance of alternative estimators for θ*, and to ask whether there is a way of estimating θ* that is "best" in this appropriate sense under specific conditions.
The purpose of this chapter is to address these issues. We first consider asymptotic efficiency for models correctly specified in their entirety, using an approach of Bahadur [1964]. We then consider the relation between efficiency and exogeneity, and efficient estimation using linear exponential models.
The use of sophisticated statistical techniques to estimate the parameters of carefully formulated economic models is today the standard practice in the attempt to learn about and from observed economic phenomena. Of course, this has not always been so. The present state of economic and econometric practice owes a very great deal of its rigor, coherence and elegance to the pioneering work of Frisch, Haavelmo, Tinbergen, Koopmans and Marschak, among others.
Of these pioneers, Haavelmo [1944] is responsible for providing the first comprehensive enunciation of the modern parametric approach to empirical economic analysis in his classic monograph The Probability Approach in Econometrics, which appeared as a supplement to Econometrica. There Haavelmo persuasively argued that the ad hoc correlation analyses and curve fitting techniques then prevalent should be replaced by modern statistical estimation and inference methods applied to carefully formulated probability models. Haavelmo showed how consideration of underlying economic principles, leading to mathematical relations embodying the economic model, together with consideration of difficulties of measurement and observation lead ultimately to a well-formulated parametric probability model. Haavelmo argued that appropriate statistical techniques, such as the method of maximum likelihood, should then be used for purposes of estimation, inference and prediction.
In earlier chapters we have considered at some length the consequences of correct specification and misspecification. In this chapter, we consider statistical methods for detecting the presence of misspecification.
Specific methods for detecting misspecification are based on the contrasting consequences of correct specification and misspecification. For example, when a model is correctly specified there are usually numerous consistent estimators for the parameters of interest (e.g., ordinary least squares and weighted least squares). If the model is correctly specified, these different estimators should have similar values asymptotically. If these values are not sufficiently similar, then the model is not correctly specified. This reasoning forms the basis for the Hausman [1978] test, a special case of which we encountered in the previous chapter. Such tests have power because of the divergence of alternative estimators under misspecification. As another example, correct specification implies the validity of the information matrix equality. If estimators for -A*n and B*n are not sufficiently similar, then one has empirical evidence against the validity of the information matrix equality and thus against the correctness of the model specification. This reasoning forms the basis for the information matrix tests (White [1982, 1987]). Such tests have power because of the failure of the information matrix equality under misspecification.
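The Hausman-style logic described above, that consistent estimators agree under correct specification but diverge under misspecification, can be illustrated with a small simulation. The data-generating processes and the 1/x² weighting scheme below are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.uniform(1, 3, size=n)
X = np.column_stack([np.ones(n), x])
w = 1.0 / x**2                      # an arbitrary fixed weighting scheme

def ols(X, y):
    # Ordinary least squares coefficients.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def wls(X, y, w):
    # Weighted least squares: minimise sum_i w_i (y_i - x_i'b)^2.
    sw = np.sqrt(w)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

# Correct specification: E[y|x] = 2 + 3x.  OLS and WLS are both
# consistent for (2, 3), so the contrast between them is small.
y_good = 2 + 3 * x + rng.normal(size=n)
contrast_good = ols(X, y_good) - wls(X, y_good, w)

# Misspecified mean: the true regression is 2 + 3x + x^2 but the fitted
# model omits the quadratic term.  OLS and WLS now converge to different
# pseudo-true values, so a large contrast signals misspecification.
y_bad = 2 + 3 * x + x**2 + rng.normal(size=n)
contrast_bad = ols(X, y_bad) - wls(X, y_bad, w)

print(np.abs(contrast_good).max())  # small
print(np.abs(contrast_bad).max())   # large
```

A formal test would compare the contrast with its asymptotic covariance; the simulation only shows why such a contrast has power against misspecification.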
The costs of children can be seen as the additional expenditure needed by a household with children to restore its standard of living to what it would have been without them. To implement this, one might think of comparing the expenditures of two households, one with and one without children, yet sharing a common level of welfare. As documented in a number of studies in this volume, the difficulty lies in finding a criterion that allows one to identify when two households of different composition are at a common living standard. While economic analysis of expenditure behaviour can provide important information on the way household spending patterns change in response to demographic change, it cannot identify preferences over composition itself and cannot identify costs of children without making assumptions about these preferences (see Pollak and Wales, 1979; Blackorby and Donaldson, 1991, and chapter 2 in this volume; Blundell and Lewbel, 1991, for example). We argue below that placing welfare measurement in an intertemporal context widens the set of parameters that we can identify and clarifies the nature of the welfare information that cannot be recovered from consumer behaviour. Quite simply, consumption changes over time following a change in demographic structure probably come closest to reflecting the consumption costs of children.
If intertemporal substitution responses are allowed for, then the usual practice of measuring costs by concentrating on the effects of children upon the within-period composition of spending seems unappealing.
There is continuing debate about how the income distribution has changed over the last decade or so. The arguments in the UK are not so much about whether overall inequality and average real income have increased, since most people agree that they have. The controversy is about whether everyone has benefited from increases in overall living standards, and the size of the changes. How have families with children fared relative to those without children? Are the income changes large or small? In this chapter we provide new answers to such questions, analysing changes in family fortunes in the UK between 1971 and 1986.
Our results also illustrate a more general methodological point: that it is important to have high quality, consistently defined, data for studying income trends. What may at first sight appear to be minor and obvious points about empirical definitions can have a significant impact on the conclusions of interest. Our research shows why official UK income statistics can be unreliable sources about trends in family fortunes. Since most commentators do nevertheless rely on official sources, it is important to document the deficiencies of these series.
Our point about the importance of data quality has been made before of course, but recent empirical work with UK household micro data has focused on the incomes of poor people only. In this chapter we address several new issues.
First we analyse the changing fortunes of middle and high income people as well as low income people.
Microeconomic theory essentially considers the household as the basic decision unit. The usual tools of consumer theory have been applied at the household level; in particular, the latter has been described by a single utility function which is maximised over a single budget constraint. This ‘traditional’ framework, however, has recently been challenged by several authors, who have developed so-called ‘collective’ models of household behaviour. The various contributions belonging to the collective approach share a fundamental claim, namely that a household should be described as a group of individuals, each of whom is characterised by particular preferences, and among whom a collective decision process takes place. The first objective of this chapter is to discuss some basic methodological issues involved in the collective approaches. A second objective is to review a particular class of collective models, based upon the Pareto efficiency hypothesis, that have recently been developed. Finally, the collective approach has important consequences for the measurement – and, as a matter of fact, for the very definition – of household welfare. This issue will be discussed in the final section of the chapter.
Models of household behaviour: some methodological issues
‘Collective’ versus ‘traditional’ approaches
The advantages of the traditional approach are well known. Essentially, the traditional setting allows a direct utilisation of the consumer theory toolbox. This includes generating testable restrictions upon demand functions, recovering preferences from observed behaviour in an unambiguous way, and providing an interpretation to empirical results.
The measurement of individual and household welfare stands out in applied economics for its ability to usefully blend economic theory with empirical practice. It is an area where empirical investigation clearly benefits from theoretical insight and where theoretical concepts are brought alive and appropriately focussed by the discipline of empirical relevance and policy design. There are difficult issues to face in identifying who gains and who loses from complex policy reforms. Potential Pareto improvements are scarce and the scope for useful policy recommendations may well be limited unless one is prepared to go further, attempting to evaluate the sizes of the gains and losses to assess whether, in some sense, the gains outweigh the losses.
A wide variety of empirical work has attempted to measure the impact of policy changes on the behaviour and living standards of individuals. This kind of work has flourished in recent years with the increased availability of large micro datasets and significant decreases in the costs of analysing such data using microeconometric methods. The aim of this book is to complement the existing literature by concentrating on the issues that are highlighted in empirical applications.
Earlier empirical work was based on estimated behavioural models obtained using aggregate data. However, this was limited insofar as it, at least implicitly, imposed the conditions required to be able to infer individual behaviour from aggregate data. Important contributions by Muellbauer (1975, 1976), by Jorgenson, Lau and Stoker (1980) and by Jorgenson (1990), building on the pioneering work of Gorman (1953, 1981), established exact conditions under which it is possible to make such inferences from aggregate data.
The best-known measures of welfare changes are the compensating and equivalent variations of Hicks. For a single individual they have the advantage of being monetary measures of welfare change that are also exact. That is, if a positive compensating variation is associated with a project, this indicates that the consumer's utility has gone up because of it. Because these measures are in monetary terms there is a natural temptation to sum them in order to evaluate potential projects. Unfortunately, Boadway (1974) showed that, in a competitive economy, the sum of compensating variations is always nonnegative (positive if prices change) for all possible projects, rendering it an inappropriate tool for project analysis. Roberts (1980), and later Blackorby and Donaldson (1985), extended this result by demonstrating that knowledge of all compensating variations associated with various projects could only be used in a Pareto-consistent fashion in rare circumstances even if all consumers face the same prices – a representative consumer would have to exist – and never if consumers face individual prices. The limited usefulness of these surpluses has led more recently to the use of another monetary measure of utility: the equivalent income function. This is the minimum expenditure needed to bring a consumer to a given level of utility at some pre-specified reference prices. Having computed these equivalent income functions, a planner can analyse the worthiness of a project by means of a social welfare function defined on them; see, for example, King (1983a).
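The equivalent income function described above, y_e(p_ref; p, y) = e(p_ref, v(p, y)), has a closed form for simple preferences. As an illustrative sketch only (the Cobb-Douglas utility and all numbers below are assumptions, not taken from the text):

```python
import math

# Illustrative assumption: a two-good Cobb-Douglas consumer with
# u(q1, q2) = alpha*ln(q1) + (1 - alpha)*ln(q2).

def indirect_utility(p1, p2, y, alpha):
    # v(p, y): maximum utility attainable at prices p and income y.
    return (math.log(y) - alpha * math.log(p1) - (1 - alpha) * math.log(p2)
            + alpha * math.log(alpha) + (1 - alpha) * math.log(1 - alpha))

def expenditure(p1, p2, u, alpha):
    # e(p, u): minimum spending needed to reach utility u at prices p.
    return math.exp(u + alpha * math.log(p1) + (1 - alpha) * math.log(p2)
                    - alpha * math.log(alpha) - (1 - alpha) * math.log(1 - alpha))

def equivalent_income(p1, p2, y, p1_ref, p2_ref, alpha):
    # y_e = e(p_ref, v(p, y)): income at reference prices that yields
    # the same utility the consumer actually attains at (p, y).
    return expenditure(p1_ref, p2_ref, indirect_utility(p1, p2, y, alpha), alpha)

alpha = 0.4
# At the reference prices themselves, equivalent income equals actual income.
print(equivalent_income(2.0, 3.0, 100.0, 2.0, 3.0, alpha))   # ≈ 100.0
# If good 1's price doubles (2 -> 4), equivalent income at the old reference
# prices falls to 100 * (2/4)^0.4 ≈ 75.79: the consumer is worse off.
print(equivalent_income(4.0, 3.0, 100.0, 2.0, 3.0, alpha))
```

Because equivalent incomes are measured at common reference prices, they are comparable across consumers, which is what allows a planner to evaluate them through a social welfare function as in King (1983a).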